AI/ML Rel-18

 RAN1#109-e

9.2       Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-213599 for detailed scope of the SI.

 

R1-2205695        Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface)            Ad-hoc Chair (CMCC)    (rev of R1-2205572)

 

R1-2205021        Work plan for Rel-18 SI on AI and ML for NR air interface             Qualcomm Incorporated

 

R1-2205022        TR skeleton for Rel-18 SI on AI and ML for NR air interface           Qualcomm Incorporated

[109-e-R18-AI/ML-01] – Juan (Qualcomm)

Email discussion and approval of TR skeleton for Rel-18 SI on AI/ML for NR air interface by May 13

R1-2205478        [109-e-R18-AI/ML-01] Email discussion and approval of TR skeleton for Rel-18 SI on AI/ML for NR air interface Moderator (Qualcomm Incorporated)

R1-2205476        TR 38.843 skeleton for Rel-18 SI on AI and ML for NR air interface               Qualcomm Incorporated

Decision: As per email decision posted on May 22nd, the revised skeleton in R1-2205478 is still not stable. Discussion to continue in next meeting.

9.2.1        General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2203280        General aspects of AI PHY framework     Ericsson

·        Proposal 1: In the study, prioritize AI/ML model generation under the assumption that AI/ML algorithms are trained offline.

·        Proposal 2: In this study item, synthetic datasets (based on TR 38.901 and TR 38.857) are used, including the possible option of spatial consistency, for at least the beam management and positioning use cases.

·        Proposal 3: RAN1 does not pursue any work in this SI aiming at agreeing AI-baseline models for calibration.

·        Proposal 4: The use of standard-transparent AI models to enhance performance can be done on a per-company basis and can be used as a reference for comparison. If such results are included, then they should be shared like any other results with respect to e.g. AI model description and KPIs.

·        Proposal 5: Study the following three collaboration cases:

o   Single-sided ML functionality at the gNB/NW only,

o   Single-sided ML functionality at the UE only,

o   Dual-sided joint ML functionality at both the UE and gNB/NW (joint operation).

·        Proposal 6: Study a multi-vendor framework including procedures and signalling for enabling dual-sided joint ML that ensures a single ML-model on the UE side that is independent from the gNB ML-model and a single ML model on the gNB side that is independent from the UE ML-model.

·        Proposal 7: At least for single-sided ML models, the model training is assumed to be proprietary; hence no specification impact is foreseen for ML model training.

·        Proposal 8: Study options for training collaboration of the dual-sided joint AI, using Cases A, B, C and D outlined above as a starting point, including at least the feasibility of model adoption from an external vendor and joint training frameworks in multi-vendor setups.

·        Proposal 9: Study in particular the combinatorial problem for dual-sided joint AI (Case C), to limit or remove the issue of having to implement/train one ML model for each collaborating vendor/device model/ML model version.

·        Proposal 10: Study mechanisms and signalling to enable the network to ensure performance of the ML functionality in the UE, both for single-sided ML functionality in the UE and for the UE part of the dual-sided joint ML functionality.

·        Proposal 11: For use-case solutions with single-sided ML models at the gNB, studies should primarily concern UE assistance in data collection, with LCM transparent to the UE.

·        Proposal 12: Study per use case solution how network LCM assistance can be introduced to assess network performance impacts due to drifts in ML model operations at the UE side.

·        Proposal 13: For use-case solutions with dual-sided joint ML, the focus in this study is on solutions that avoid standardization of the model deployment stage for updating a model.

·        Proposal 14: Study the need from the lower-layer perspective for improved UE capability reporting for conveying ML model-related information in PHY use cases, including enabling ML model version info provision and handling.

·        Proposal 15: In the study, deployed ML-capable UEs that support model update mechanism should be considered.

·        Proposal 16: Study adaptability measures to ensure UE ML model robustness across deployment scenarios in the three selected use cases.

Decision: The document is noted.

 

R1-2204570        ML terminology, descriptions, and collaboration framework            Nokia, Nokia Shanghai Bell

·        Proposal 1: RAN1 maintains a list of ML-related terms and definitions. Terminology in Annex A could be used as a starting point.

·        Proposal 2: RAN1 agrees that the terms used in this study are valid only for the air interface; at the final stage, some adjustments in terminology may be needed to align with other 3GPP groups.

·        Proposal 3: RAN1 at least to differentiate RL-based algorithms from other types of ML algorithms. 

·        Proposal 4: RAN1 will support only the collaboration-based solutions if they outperform implementation-based ML solutions and/or non-ML baselines.

·        Proposal 5: RAN1 defines and maintains possible collaboration options and uses them to map the collaboration in the use-cases under study.

·        Proposal 6: RAN1 to adopt a high-level description of the ML-based solutions using a defined set of processing blocks, including at least the description of their input and output data, type of algorithm, hyperparameters, and control mechanisms used.

·        Proposal 7: The RAN1 complexity comparison is to be performed between the different ML-enabled solutions proposed for the same function (sub-use case).

·        Proposal 8: The RAN1 complexity estimation of an ML-enabled function should include the analysis of both training and inference operating modes.

·        Proposal 9: RAN1 to consider including at least the following items in the complexity analysis of ML-enabled solutions:

o   Training (or initial training/exploration for RL)

§  Number of floating-point operations required for one iteration (forward-backward) of the ML-algorithm

§  Number of required training iterations (steps and epochs) to reach the training performance/accuracy

§  Alternatively to the two items above, the floating-point operations per second needed to run the training

§  Memory footprint of the ML algorithm (Mbit)

§  Memory footprint of the potentially required input and output data storage (Gbit)

§  Number of floating-point operations required to prepare (and format, convert) the input data in case these are not direct measurements or estimates readily available in the radio entity executing the ML-enabled function

§  Estimated number and payload (bytes) of additional signalling messages required to convey the ML-input and ML-output information between the involved radio entities (gNB and UE)

·        This might be complemented by the estimated required ML-input and ML-output data rates, i.e., factoring in the acceptable transmission delays

o   Inference (or exploration/exploitation for RL)

§  Number of floating-point operations required for one forward pass of the ML-algorithm

§  Alternatively to the item above, the floating-point operations per second needed to run the ML algorithm for (X) seconds

§  Number of floating-point operations required to prepare (and format, convert) the input data in case these are not direct measurements or estimates readily available in the radio entity executing the ML-enabled function

§  Estimated number and payload (bytes) of additional signalling messages required to convey the ML-input and ML-output information between the involved radio entities (gNB, UE)

·        This might be complemented by the estimated required ML-input and ML-output data rates, i.e., factoring in the acceptable transmission delays

·        Proposal 10: RAN1 to use simulator data for the study; after sufficient progress and convergence on the solutions, evaluation with field data can be discussed.

Decision: The document is noted.
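As an editorial illustration of the complexity items listed in Proposal 9 above (forward/backward FLOP counts and model memory footprint), the following sketch estimates them for a simple fully-connected network. The layer sizes, byte width, and the 3x forward-cost rule of thumb for a training iteration are hypothetical assumptions, not values from the contribution.

```python
# Illustrative sketch (not from the contribution): estimating the FLOP and
# memory items of Proposal 9 for a small fully-connected model.

def dense_layer_flops(n_in, n_out):
    # One forward pass of a dense layer: n_in*n_out multiply-accumulates,
    # counted here as 2*n_in*n_out floating-point operations.
    return 2 * n_in * n_out

def model_complexity(layer_sizes, bytes_per_param=4):
    pairs = list(zip(layer_sizes, layer_sizes[1:]))
    flops_fwd = sum(dense_layer_flops(a, b) for a, b in pairs)
    # Common rule of thumb (an assumption here): the backward pass costs about
    # twice the forward pass, so one training iteration is ~3x forward FLOPs.
    flops_train_iter = 3 * flops_fwd
    n_params = sum(a * b + b for a, b in pairs)        # weights + biases
    mem_mbit = n_params * bytes_per_param * 8 / 1e6    # model footprint, Mbit
    return flops_fwd, flops_train_iter, mem_mbit

fwd, train_iter, mbit = model_complexity([256, 512, 64])
```

Reporting per-iteration FLOPs together with the number of iterations, as Proposal 9 suggests, then gives the total training cost.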

 

R1-2205023        General aspects of AIML framework        Qualcomm Incorporated

·        Proposal 1: The following terms should be adopted and defined accordingly: Data collection, AI/ML Model, AI/ML Training, AI/ML Inference.

·        Proposal 2: The following terms should be adopted and defined accordingly: On-device Model, On-network Model, Cross-node (X-node) Model, On-device Training, Online Training.

·        Proposal 3: Rel-18 study should take into account offline-engineering nature of On-device Model developments, so that concrete specification recommendations could be derived toward Rel-19 WI.

·        Proposal 4: Consider registered On-device Models and unregistered On-device Models as two On-device Model categories for Rel-18 study and discussion.

·        Proposal 5: For both Registered and Unregistered On-device Models, the model can remain proprietary, and its structure and parameters need not be revealed for the purpose of model activation, switching, deactivation, and performance monitoring.

·        Proposal 6: For X-node ML models, the model can remain proprietary, and its structure and parameters need not be revealed for the purpose of model activation, switching, deactivation, and performance monitoring. It is up to the arrangement between the party (parties) that were involved in designing the model, whether the UE-side model (the gNB-side model) should be known at the gNB vendor (the UE vendor).

·        Proposal 7: Study the following aspects for general specification frameworks for On-device Models: training data assistance; assistance information for training and inference; model activation, switching, and deactivation; model performance monitoring and related signaling support; UE capability; X-node inference operation (for X-node models).

·        Proposal 8: For On-device Models, focus on offline model development and training in the Rel-18 SI, where models are designed and trained outside 3GPP. The Rel-18 SI may still scope out, if sufficient benefits are identified, network-controlled on-device model generation to give guidance for future study, with the understanding that such scoping may be highly speculative and unlikely to be realizable within the 5G-Advanced timeframe.

·        Proposal 9: For network-side AI/ML models, study scenarios where UE may be aware of AI/ML models running at the network, and study model monitoring procedure as applicable. Study related specification impacts.

·        Proposal 10: Study meta-data assistance signaling for UE’s training data collection for On-device Model development. Here, meta-data refers to auxiliary information about data. An example meta-data for CSI-RS is its beam configuration ID.

·        Proposal 11: Study (noisy) ground truth assistance signaling for UE’s training data collection of On-device Models

·        Proposal 12: Study assistance information signaling to UE for On-device Model training and inference.

·        Proposal 13: For performance monitoring of On-device Models, study the following aspects: dedicated RS for the purpose of performance monitoring; feedback needed for performance monitoring; indication of the performance monitoring result to the UE or UE vendor (in 3GPP or outside 3GPP).

·        Proposal 14: For performance monitoring of network-side models, study the following aspects for general specification frameworks: dedicated RS for the purpose of performance monitoring; feedback needed for performance monitoring (in case the performance monitoring is done at the gNB); reporting of the performance monitoring result to the gNB (in case the performance monitoring is done at the UE).

·        Proposal 15: Consider the role of model performance monitoring in relation to RAN4 tests.

·        Proposal 16: Rel-18 RAN1 study dataset principles:

o   Strive to use 3GPP channel models from TR 38.901 for the Rel-18 evaluation study.

o   Careful consideration of spatial consistency in use cases such as positioning.

o   Agree on evaluation methodology rather than on a dataset.

o   Companies may voluntarily share datasets, either synthetic or real-world.

o   Companies are encouraged to share sufficient details on the evaluation assumptions, statistics, and/or experiment setups for the dataset, as otherwise evaluation results based on the dataset may be hard to assess and questionable to accept for the study.

·        Proposal 17: Rel-18 RAN1 study AI/ML model principles: AI/ML models remain proprietary and are not specified in 3GPP. For the 3GPP study, companies are encouraged to share descriptions of their AI/ML models and training procedures, and may voluntarily share their AI/ML models.

Decision: The document is noted.

 

R1-2204416        General aspects of AI/ML framework       Lenovo

·        Proposal 1: A general framework for this study on AI/ML for NR air interface enhancement is needed to align the understanding on the relevant functions for future investigation.

·        Proposal 2: Define and construct different data sets for different purposes, such as for model training and for model validation.

·        Proposal 3: Use Option 1a or 1b, i.e., simulation-data based, to construct the data set at least for model training; data set construction for other purposes needs further discussion.

·        Proposal 4: The acquisition on ground-truth data for supervised learning needs to be workable in practice for any proposed AI/ML approach.

·        Proposal 5: Define three categories of gNB-UE collaboration levels as listed in Table 1, according to the AI/ML operation-related information exchanged.

·        Proposal 6: Adopt the AI Model Characterization Card (MCC) of an AI/ML model in Table 2 as a starting point for further discussion and refinement.

·        Proposal 7: Consider the KPIs/Metrics (if applicable) in Table 4 as a starting point for the common aspects of an evaluation methodology of a proposed AI/ML model for any of the agreed use cases.

Decision: The document is noted.

 

R1-2203067         Discussion on common AI/ML characteristics and operations  FUTUREWEI

R1-2203139         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2203247         Discussion on common AI/ML framework   ZTE

R1-2203404         Discussions on AI-ML framework  New H3C Technologies Co., Ltd.

R1-2203450         Discussion on AI/ML framework for air interface       CATT

R1-2203549         General discussions on AI/ML framework    vivo

R1-2203656         Discussion on general aspects of AI/ML for NR air interface   China Telecom

R1-2203690         Discussion on general aspects of AI ML framework   NEC

R1-2203728         Consideration on common AI/ML framework             Sony

R1-2203807         Initial views on the general aspects of AI/ML framework         xiaomi

R1-2203896         General aspects of AI ML framework and evaluation methodology           Samsung

R1-2204014         On general aspects of AI/ML framework      OPPO

R1-2204062         Evaluating general aspects of AI-ML framework        Charter Communications, Inc

R1-2204077         General aspects of AI/ML framework           Panasonic

R1-2204120         Considerations on AI/ML framework            SHARP Corporation

R1-2204148         General aspects on AI/ML framework           LG Electronics

R1-2204179         Views on general aspects on AI-ML framework         CAICT

R1-2204237         Discussion on general aspect of AI/ML framework    Apple

R1-2204294         Discussion on general aspects of AI/ML framework   CMCC

R1-2204374         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2204498         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2204650         Discussion on AI/ML framework for NR air interface ETRI

R1-2204792         Discussion of AI/ML framework    Intel Corporation

R1-2204839         On general aspects of AI and ML framework for NR air interface          NVIDIA

R1-2204859         General aspects of AI/ML framework for NR air interface       AT&T

R1-2204936         General aspects of AI/ML framework           Mavenir

R1-2205065         AI/ML Model Life cycle management          Rakuten Mobile

R1-2205075         Discussions on general aspects of AI/ML framework Fujitsu Limited

R1-2205099         Overview to support artificial intelligence over air interface     MediaTek Inc.

 

[109-e-R18-AI/ML-02] – Taesang (Qualcomm)

Email discussion on general aspects of AI/ML by May 20

-        Check points: May 18

R1-2205285        Summary#1 of [109-e-R18-AI/ML-02]       Moderator (Qualcomm)

From May 13th GTW session

Agreement

·        Use 3GPP channel models (TR 38.901) as the baseline for evaluations.

·        Note: Companies may submit additional results based on datasets other than those generated by 3GPP channel models

 

R1-2205401        Summary#2 of [109-e-R18-AI/ML-02]       Moderator (Qualcomm)

From May 17th GTW session

Working Assumption

Include the following into a working list of terminologies to be used for RAN1 AI/ML air interface SI discussion.

The description of the terminologies may be further refined as the study progresses.

New terminologies may be added as the study progresses.

It is FFS which subset of terminologies to capture into the TR.

 

Terminology: Description

Data collection: A process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference

AI/ML Model: A data-driven algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs

AI/ML model training: A process to train an AI/ML Model [by learning the input/output relationship] in a data-driven manner and obtain the trained AI/ML Model for inference

AI/ML model inference: A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs

AI/ML model validation: A subprocess of training, to evaluate the quality of an AI/ML model using a dataset different from the one used for model training, that helps select model parameters that generalize beyond the dataset used for model training

AI/ML model testing: A subprocess of training, to evaluate the performance of a final AI/ML model using a dataset different from the ones used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model

UE-side (AI/ML) model: An AI/ML Model whose inference is performed entirely at the UE

Network-side (AI/ML) model: An AI/ML Model whose inference is performed entirely at the network

One-sided (AI/ML) model: A UE-side (AI/ML) model or a Network-side (AI/ML) model

Two-sided (AI/ML) model: A paired AI/ML Model(s) over which joint inference is performed, where joint inference comprises AI/ML Inference whose inference is performed jointly across the UE and the network, i.e., the first part of inference is firstly performed by the UE and then the remaining part is performed by the gNB, or vice versa

AI/ML model transfer: Delivery of an AI/ML model over the air interface, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model

Model download: Model transfer from the network to the UE

Model upload: Model transfer from the UE to the network

Federated learning / federated training: A machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g., UEs, gNBs) each performing local model training using local data samples. The technique requires multiple exchanges of the model, but no exchange of local data samples

Offline field data: The data collected from the field and used for offline training of the AI/ML model

Online field data: The data collected from the field and used for online training of the AI/ML model

Model monitoring: A procedure that monitors the inference performance of the AI/ML model

Supervised learning: A process of training a model from input and its corresponding labels

Unsupervised learning: A process of training a model without labelled data

Semi-supervised learning: A process of training a model with a mix of labelled data and unlabelled data

Reinforcement Learning (RL): A process of training an AI/ML model from input (a.k.a. state) and a feedback signal (a.k.a. reward) resulting from the model’s output (a.k.a. action) in an environment the model is interacting with

Model activation: Enable an AI/ML model for a specific function

Model deactivation: Disable an AI/ML model for a specific function

Model switching: Deactivating a currently active AI/ML model and activating a different AI/ML model for a specific function
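As an editorial illustration of the "two-sided model" terminology above, the sketch below shows joint inference split across the UE and the network: the first part of inference (compressing CSI into a low-dimensional latent for feedback) runs at the UE, and the remaining part (reconstruction) runs at the gNB. All weights and dimensions are hypothetical placeholders, not anything agreed in the study.

```python
# Minimal numpy sketch of a two-sided (paired) model: UE-side part followed
# by network-side part, per the joint-inference description above.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((32, 8))    # UE-side part of the paired model
W_dec = rng.standard_normal((8, 32))    # network-side part of the paired model

def ue_side_inference(csi):
    # First part of joint inference, performed by the UE
    return np.tanh(csi @ W_enc)         # low-dimensional latent to feed back

def nw_side_inference(latent):
    # Remaining part of joint inference, performed by the gNB
    return latent @ W_dec               # reconstructed CSI

csi = rng.standard_normal(32)
reconstructed = nw_side_inference(ue_side_inference(csi))
```

Model transfer, download, and upload in the terminology list then correspond to moving one or both of these parts over the air interface.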

 

Conclusion

As indicated in SID, although specific AI/ML algorithms and models may be studied for evaluation purposes, AI/ML algorithms and models are implementation specific and are not expected to be specified.

 

Observation

Where AI/ML functionality resides depends on specific use cases and sub-use cases.

 

Conclusion

·        RAN1 discussion should focus on network-UE interaction.

o   AI/ML functionality mapping within the network (such as gNB, LMF, or OAM) is up to RAN2/3 discussion.

 

R1-2205474        Summary#3 of [109-e-R18-AI/ML-02]       Moderator (Qualcomm)

 

R1-2205522        Summary#4 of [109-e-R18-AI/ML-02]       Moderator (Qualcomm)

From May 20th GTW session

Agreement

Take the following network-UE collaboration levels as one aspect for defining collaboration levels

1.            Level x: No collaboration

2.            Level y: Signaling-based collaboration without model transfer

3.            Level z: Signaling-based collaboration with model transfer

Note: Other aspect(s) for defining collaboration levels are not precluded and will be discussed in later meetings, e.g., with/without model updating, or support of training/inference.

FFS: Clarification is needed for Level x-y boundary

 

Note: Extended email discussion focusing on evaluation assumptions to take place

·        Dates: May 23 – 24

9.2.2        AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2203897        Evaluation on AI ML for CSI feedback enhancement          Samsung

·        Proposal 1-1: For CSI prediction, to model user mobility, consider the link-level channel model with Doppler information in Section 7.5 of TR 38.901.

·        Proposal 1-2: For CSI prediction, consider Rel-16 CSI feedback and Rel-17 CSI feedback, as benchmark schemes.

·        Proposal 1-3: For CSI predictions, reuse channel models in TR 38.901 to generate datasets for training/testing/validation in this sub-use case.

·        Proposal 1-4: For KPIs in CSI prediction, proxy metrics such as NMSE and cosine similarity can be considered as intermediate KPIs, and system-level metrics such as UPT can be used as general KPIs.

·        Proposal 1-5: For CSI prediction, consider capability-related KPIs such as computational complexity, power consumption, memory storage, and hardware requirements.

·        Proposal 2-1: Consider an auto-encoder as a baseline AI/ML model for CSI feedback compression and reconstruction tasks. Further study is needed to select the baseline type of neural network (e.g., CNN, RNN, LSTM).

·        Proposal 2-2: For calibration in CSI compression, consider both performance-related KPIs (e.g., reconstruction accuracy) and capability-related KPIs (e.g., computational complexity) for the baseline AI/ML model.

·        Proposal 2-3: For model calibration in CSI compression only, the loss function, hyper-parameter values, and details of the AI model should be aligned together.

·        Proposal 2-4: For CSI compression, consider intermediate performance metrics (e.g., NMSE, CS) and UPT as final metric.

·        Proposal 2-5: Consider various aspects of AI/ML models including computational complexity and the model size to study the AI processing burden and requirement at the UE.

·        Proposal 2-6: To evaluate the capability of model generalization concerning various channel parameters (e.g., Rician K factor, path loss, angles, delays, powers, etc.), consider datasets from mixed scenarios or different distributions of channel parameters in a single scenario.

·        Proposal 3-1: Consider a two-phased approach for evaluation: Phase I to compare various AI/ML models and their gains for representative sub-use case selection, and Phase II to evaluate the gain of AI/ML schemes as compared to conventional benchmark schemes in communication systems.

·        Proposal 3-2: Strive to reuse the evaluation assumptions of Rel. 16/17 codebook enhancement as much as possible with additional mobility modeling. FFS: mobility modeling, and other additional considerations to model time-correlated CSI.

·        Proposal 3-3: Target moderate UE mobility, e.g., up to 30 km/h, for joint CSI prediction and compression.

·        Proposal 3-4: Consider either Rel-16 or Rel-17 CBs as a benchmark conventional scheme for performance comparison purposes. The selection of a benchmark conventional scheme could be based on whether angle-delay reciprocity is exploited in the channel measurement.

·        Proposal 3-5: Consider an autoencoder-based AI/ML solution for joint CSI compression and prediction. 

·        Proposal 3-6: Consider simpler performance metrics, e.g., NMSE, cosine similarity, for Phase I of evaluation. Traditional performance metrics employed for codebook performance evaluation, such as UPT vs. feedback overhead, can be considered for Phase II.

·        Proposal 3-7: Consider UE capability-related KPIs for AI/ML-based CSI compression and prediction, including computational complexity, memory storage, inference latency, model/training data transfer overhead, if applicable.

Decision: The document is noted.
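The intermediate KPIs recurring in the proposals above (NMSE and cosine similarity between the target CSI and the AI/ML output) could be computed as sketched below. These are illustrative definitions assumed by the editor, not formulas agreed at RAN1#109-e.

```python
# Illustrative intermediate-KPI definitions for CSI feedback evaluation.
import numpy as np

def nmse_db(h_true, h_hat):
    # Normalized mean squared error, reported in dB
    err = np.linalg.norm(h_true - h_hat) ** 2 / np.linalg.norm(h_true) ** 2
    return 10 * np.log10(err)

def cosine_similarity(h_true, h_hat):
    # Plain vector form; a squared, per-subband variant is also common
    num = np.abs(np.vdot(h_true, h_hat))
    return num / (np.linalg.norm(h_true) * np.linalg.norm(h_hat))

h = np.array([1.0, 2.0, 3.0])
rho = cosine_similarity(h, 2 * h)   # scaling leaves the similarity at 1
err_db = nmse_db(h, 1.1 * h)        # a 10% amplitude error gives -20 dB
```

Final KPIs such as UPT or spectral efficiency would then be obtained from system-level simulation rather than from these per-sample metrics.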

 

R1-2203550        Evaluation on AI/ML for CSI feedback enhancement          vivo

Proposal 1:       The dataset for AI-model training, validation and testing can be constructed mainly based on the channel model(s) defined in TR 38.901, namely, UMi, UMa, and Indoor scenarios in system level simulation, and optionally on CDL in link level simulation.

Proposal 2:        Consider both cases with same or different input data dimensions for data set construction to verify generalization performance.

Proposal 3:        For CSI enhancement, the data set should be constructed in a way that data samples across different UEs, different cells, different drops, different scenarios are all included.

Proposal 4:        Both of the following two cases should be considered for generalization performance verification:

a)        Case 1: the training data set is constructed by mixing data from different setups

b)        Case 2: the training set and testing data set are from different setups

Proposal 5:        For the case where the training data set is constructed by mixing data from different setups, the dataset for generalization can be constructed based on the combination of different scenarios and configurations. Different ratios of data mixture can be evaluated with the same total sample number for each dataset.

Proposal 6:        For AI model calibration, the parameters used to construct the dataset need to be aligned.

Proposal 7:        Companies are encouraged to share their data sets and model files in a publicly accessible way for cross-check purposes. Our initial data set files for CSI compression and CSI prediction are on links [5] and [6].

Proposal 8:        Ideal downlink channel estimation is assumed as the starting point for the performance evaluation.

Proposal 9:        Use ideal UCI feedback for the performance evaluation.

Proposal 10:     The evaluation assumption in Table 2 is used as the SLS assumptions for both non-AI and AI-based performance evaluations.

Proposal 11:     Parameter perturbation based on the basic parameter in Table 2 can be conducted to verify generalization performance of each case.

Proposal 12:     The evaluation assumption in Table 3 is used as the LLS assumptions for AI-based CSI prediction evaluations.

Proposal 13:     Study the performance loss caused by n-bit quantization of AI model parameters, with floating-point AI model parameters as the baseline.

Proposal 14:     Clarify the quantization level of the AI model for evaluation.

Proposal 15:     Spectral efficiency [bits/s/Hz] can be used as the final evaluation metric, while the absolute value or square of cosine similarity and NMSE can be used as intermediate metrics to measure the similarity and difference between input and output.

Proposal 16:     Generalization performance is also used as one KPI to verify whether AI/ML can work across multiple setups.

Proposal 17:     The complexity, parameter sizes, quantization, latencies and power consumption of models need to be considered.

Proposal 18:     The impact of the type of historical CSI inputs should be studied for the AI-based CSI prediction.

Proposal 19:     The choice of number of historical CSI inputs should be studied for the AI-based CSI prediction.

Proposal 20:     The study of the prediction of multiple future CSIs has high priority.

Proposal 21:     The generalization performance across frequency domain should be studied.

Proposal 22:     The generalization capability with respect to scenarios should be studied.

Proposal 23:     Finetuning of AI-based CSI prediction should be studied.

Decision: The document is noted.
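The CSI-prediction setup discussed in Proposals 18-20 above, where a window of historical CSI inputs is used to predict future CSI, can be illustrated with a toy sketch. Here a simple least-squares autoregressive predictor stands in for the AI/ML model, and the signal and window length are hypothetical assumptions by the editor.

```python
# Toy CSI-prediction illustration: N historical samples predict the next one.
import numpy as np

def fit_ar_predictor(history, order):
    # Fit coefficients w minimizing ||X w - y|| over sliding windows
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = np.array(history[order:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_next(history, w):
    # Apply the fitted coefficients to the most recent window
    return float(np.dot(history[-len(w):], w))

t = np.arange(32)
csi = np.sin(0.3 * t)                  # stand-in for one CSI coefficient over time
w = fit_ar_predictor(list(csi), order=4)
pred = predict_next(list(csi), w)      # prediction for t = 32
```

The choices studied in the proposals, i.e. the type and number of historical CSI inputs and the number of predicted future CSIs, map directly onto the window contents, the window length, and how many steps ahead the predictor is applied.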

 

R1-2203650         Evaluation on AI-based CSI feedback           SEU

R1-2204041         Considerations on AI-enabled CSI overhead reduction              CENC

R1-2204606         Discussion on the AI/ML methods for CSI feedback enhancements       Fraunhofer IIS, Fraunhofer HHI

R1-2203068         Discussion on evaluation of AI/ML for CSI feedback enhancement use case               FUTUREWEI

R1-2203140         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2203248         Evaluation assumptions on AI/ML for CSI feedback  ZTE

R1-2203281         Evaluations on AI-CSI       Ericsson

R1-2203451         Discussion on evaluation on AI/ML for CSI feedback CATT

R1-2203808         Discussion on evaluation on AI/ML for CSI feedback enhancement       xiaomi

R1-2204015         Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement       OPPO

R1-2204050         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2204055         Evaluation of CSI compression with AI/ML Beijing Jiaotong University

R1-2204063         Performance evaluation of ML techniques for CSI feedback enhancement               Charter Communications, Inc

R1-2204149         Evaluation on AI/ML for CSI feedback enhancement LG Electronics

R1-2204180         Some discussions on evaluation on AI-ML for CSI feedback   CAICT

R1-2204238         Initial evaluation on AI/ML for CSI feedback             Apple

R1-2204295         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2204375         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2204417         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2204499         Discussion on evaluation on AI/ML for CSI feedback enhancement       Spreadtrum Communications, BUPT

R1-2204571         Evaluation on ML for CSI feedback enhancement      Nokia, Nokia Shanghai Bell

R1-2204793         Evaluation for CSI feedback enhancements  Intel Corporation

R1-2204840         On evaluation assumptions of AI and ML for CSI feedback enhancement               NVIDIA

R1-2204860         Evaluation of AI/ML for CSI feedback enhancements AT&T

R1-2205024         Evaluation on AIML for CSI feedback enhancement  Qualcomm Incorporated

R1-2205076         Evaluation on AI/ML for CSI feedback enhancement Fujitsu Limited

R1-2205100         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

 

[109-e-R18-AI/ML-03] – Yuan (Huawei)

Email discussion on evaluation of AI/ML for CSI feedback enhancement by May 20

-        Check points: May 18

R1-2205222         Summary#1 of [109-e-R18-AI/ML-03]         Moderator (Huawei)

R1-2205223        Summary#2 of [109-e-R18-AI/ML-03]       Moderator (Huawei)

From May 13th GTW session

Agreement

For the performance evaluation of the AI/ML based CSI feedback enhancement, system level simulation approach is adopted as baseline

·        Link level simulation is optionally adopted

 

R1-2205224        Summary#3 of [109-e-R18-AI/ML-03]       Moderator (Huawei)

Decision: As per email decision posted on May 19th,

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, for the calibration purpose on the dataset and/or AI/ML model over companies, consider aligning the parameters (e.g., for scenarios/channels) for generating the dataset in the simulation as a starting point.

 

 

Decision: As per email decision posted on May 20th,

Agreement 

For the evaluation of the AI/ML based CSI feedback enhancement, for ‘Channel estimation’, ideal DL channel estimation is optionally taken into the baseline of EVM for the purpose of calibration and/or comparing intermediate results (e.g., accuracy of AI/ML output CSI, etc.)

·        Note: Eventual performance comparison with the benchmark release and drawing SI conclusions should be based on realistic DL channel estimation.

·        FFS: the ideal channel estimation is applied for dataset construction, or performance evaluation/inference.

·        FFS: How to model the realistic channel estimation

·        FFS: Whether ideal channel is used as target CSI for intermediate results calculation with AI/ML output CSI from realistic channel estimation

Agreement 

For the evaluation of the AI/ML based CSI feedback enhancement, companies can consider performing intermediate evaluation on AI/ML model performance to derive the intermediate KPI(s) (e.g., accuracy of AI/ML output CSI) for the purpose of AI/ML solution comparison.

 

Agreement 

For the evaluation of the AI/ML based CSI feedback enhancement, Floating point operations (FLOPs) is adopted as part of the ‘Evaluation Metric’, and reported by companies.

 

Agreement 

For the evaluation of the AI/ML based CSI feedback enhancement, AI/ML memory storage in terms of AI/ML model size and number of AI/ML parameters is adopted as part of the ‘Evaluation Metric’, and reported by companies who may select either or both.

·        FFS: the format of the AI/ML parameters
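As a back-of-envelope illustration of the two complexity metrics agreed above (FLOPs and memory storage in terms of model size and number of parameters), the following sketch counts them for a simple fully-connected model. The MLP structure, the 4-bytes-per-parameter precision, and the convention of one multiply plus one add per weight are editorial assumptions, not anything agreed in the SI.

```python
# Hedged sketch: parameter count, memory footprint, and FLOPs per inference
# for a fully-connected model. Layer sizes and precision are illustrative.
def mlp_complexity(layer_sizes, bytes_per_param=4):
    params = sum(n_in * n_out + n_out            # weights + biases per layer
                 for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
    flops = sum(2 * n_in * n_out                 # one multiply + one add per weight
                for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
    return params, params * bytes_per_param, flops

# e.g. a hypothetical 256-64-32 CSI encoder
p, b, f = mlp_complexity([256, 64, 32])
# p = 18528 parameters, b = 74112 bytes at FP32, f = 36864 FLOPs
```

Biases are counted in the parameter/memory figures but not in the FLOPs figure; companies reporting these metrics would state their own counting convention.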

Agreement

For the evaluation of the AI/ML based CSI compression sub use cases, a two-sided model is considered as a starting point, including an AI/ML-based CSI generation part to generate the CSI feedback information and an AI/ML-based CSI reconstruction part which is used to reconstruct the CSI from the received CSI feedback information.

·        At least for inference, the CSI generation part is located at the UE side, and the CSI reconstruction part is located at the gNB side.
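The two-sided structure agreed above can be sketched minimally as follows: a UE-side CSI generation part that produces the feedback payload, and a gNB-side CSI reconstruction part. The random linear projection, the latent size, and the 2-bit uniform quantizer are placeholder choices for illustration only, standing in for trained AI/ML parts.

```python
# Structural sketch of the two-sided model (assumptions: linear projection,
# 2-bit quantization; real systems would use trained encoder/decoder networks).
import random

random.seed(0)
DIM, CODE = 8, 4  # illustrative CSI vector dimension and latent size
W_enc = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(CODE)]

def csi_generation(v):
    """UE side: project the CSI to a latent vector, quantize to the payload."""
    latent = [sum(w * x for w, x in zip(row, v)) for row in W_enc]
    return [max(-2, min(1, round(z))) for z in latent]  # 2-bit levels {-2..1}

def csi_reconstruction(payload):
    """gNB side: map the received payload back to a CSI estimate
    (here simply the transposed projection, standing in for a decoder)."""
    return [sum(W_enc[k][i] * q for k, q in enumerate(payload))
            for i in range(DIM)]

v = [random.gauss(0, 1) for _ in range(DIM)]
payload = csi_generation(v)
v_hat = csi_reconstruction(payload)
```

The quantization/de-quantization step sits inside the sub use case scope per the later agreement in 9.2.2.2.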

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the following table is taken as a baseline of EVM

·        Note: the following table captures the common parts of the R16 CSI enhancement EVM table and the R17 CSI enhancement EVM table, while the different parts are FFS.

·        Note: the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions.

o   The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.

·        FFS: modifications on top of the following table for the purpose of AI/ML related evaluations.

Parameter: Value

Duplex, Waveform: FDD (TDD is not precluded), OFDM

Multiple access: OFDMA

Scenario: Dense Urban (Macro only) is a baseline. Other scenarios (e.g. UMi@4GHz 2GHz, Urban Macro) are not precluded.

Frequency Range: FR1 only, FFS 2GHz or 4GHz as a baseline

Inter-BS distance: 200m

Channel model: According to TR 38.901

Antenna setup and port layouts at gNB: Companies need to report which option(s) are used between
-          32 ports: (8,8,2,1,1,2,8), (dH,dV) = (0.5, 0.8)λ
-          16 ports: (8,4,2,1,1,2,4), (dH,dV) = (0.5, 0.8)λ
Other configurations are not precluded.

Antenna setup and port layouts at UE:
-          4RX: (1,2,2,1,1,1,2), (dH,dV) = (0.5, 0.5)λ for (rank 1-4)
-          2RX: (1,1,2,1,1,1,1), (dH,dV) = (0.5, 0.5)λ for (rank 1,2)
Other configurations are not precluded.

BS Tx power: 41 dBm for 10MHz, 44dBm for 20MHz, 47dBm for 40MHz

BS antenna height: 25m

UE antenna height & gain: Follow TR36.873

UE receiver noise figure: 9dB

Modulation: Up to 256QAM

Coding on PDSCH: LDPC, max code-block size = 8448 bits

Numerology: Slot/non-slot: 14 OFDM symbol slot; SCS: 15kHz for 2GHz, 30kHz for 4GHz

Simulation bandwidth: FFS

Frame structure: Slot Format 0 (all downlink) for all slots

MIMO scheme: FFS

MIMO layers: For all evaluations, companies to provide the assumption on the maximum MU layers (e.g. 8 or 12)

CSI feedback: Feedback assumption at least for the baseline scheme:
-          CSI feedback periodicity (full CSI feedback): 5 ms
-          Scheduling delay (from CSI feedback to time to apply in scheduling): 4 ms

Overhead: Companies shall provide the downlink overhead assumption (i.e., whether the CSI-RS transmission is UE-specific or not, and take that into account for overhead computation)

Traffic model: FFS

Traffic load (Resource utilization): FFS

UE distribution: 80% indoor (3km/h), 20% outdoor (30km/h); FFS whether/what other indoor/outdoor distribution and/or UE speeds for outdoor UEs are needed

UE receiver: MMSE-IRC as the baseline receiver

Feedback assumption: Realistic

Channel estimation: Realistic as a baseline; FFS ideal channel estimation

Evaluation Metric: Throughput and CSI feedback overhead as baseline metrics. Additional metrics, e.g., ratio between throughput and CSI feedback overhead, can be used. Maximum overhead (payload size for CSI feedback) for each rank at one feedback instance is the baseline metric for CSI feedback overhead, and companies can provide other metrics.

Baseline for performance evaluation: FFS

 

 

R1-2205491        Summary#4 of [109-e-R18-AI/ML-03]       Moderator (Huawei)

Decision: As per email decision posted on May 22nd,

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, as a starting point, take the intermediate KPIs of GCS/SGCS and/or NMSE as part of the ‘Evaluation Metric’ to evaluate the accuracy of the AI/ML output CSI

·        For GCS/SGCS,

o   FFS: how to calculate GCS/SGCS for rank>1

o   FFS: whether GCS or SGCS is adopted

·        FFS other metrics, e.g., equivalent MSE, received SNR, or numerical spectral efficiency gap.
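As a concrete reading of the GCS/SGCS intermediate KPI agreed above, the following is a minimal SGCS sketch for rank 1. The per-resource-unit eigenvector representation and the symbol names are editorial assumptions for illustration; the exact definitions (and GCS vs. SGCS) are still FFS in the agreement.

```python
# Hedged sketch: squared generalized cosine similarity (SGCS) between a
# target eigenvector and an AI/ML-reconstructed eigenvector (rank 1).
def sgcs(v, v_hat):
    """SGCS = |v^H v_hat|^2 / (||v||^2 ||v_hat||^2) for one resource unit."""
    inner = sum(a.conjugate() * b for a, b in zip(v, v_hat))
    norm_v = sum(abs(a) ** 2 for a in v)
    norm_vh = sum(abs(b) ** 2 for b in v_hat)
    return abs(inner) ** 2 / (norm_v * norm_vh)

def mean_sgcs(targets, outputs):
    """Average SGCS over all resource units (e.g., sub-bands)."""
    return sum(sgcs(v, vh) for v, vh in zip(targets, outputs)) / len(targets)

# Identical vectors give SGCS = 1; orthogonal vectors give 0.
v = [1 + 0j, 1j]
assert abs(sgcs(v, v) - 1.0) < 1e-12
assert sgcs([1 + 0j, 0j], [0j, 1 + 0j]) < 1e-12
```

NMSE would be computed analogously on the reconstruction error norm; averaging over multiple samples is left implicit here.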

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, if LLS is preferred, the following table is taken as a baseline of EVM

·        Note: the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions.

o   The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.

·        FFS: modifications on top of the following table for the purpose of AI/ML related evaluations.

·        FFS: other parameters and values if needed

Parameter: Value

Duplex, Waveform: FDD (TDD is not precluded), OFDM

Carrier frequency: 2GHz as baseline, optional for 4GHz

Bandwidth: 10MHz or 20MHz

Subcarrier spacing: 15kHz for 2GHz, 30kHz for 4GHz

Nt: 32: (8,8,2,1,1,2,8), (dH,dV) = (0.5, 0.8)λ

Nr: 4: (1,2,2,1,1,1,2), (dH,dV) = (0.5, 0.5)λ

Channel model: CDL-C as baseline, CDL-A as optional

UE speed: 3km/h, 10km/h, 20km/h or 30km/h, to be reported by companies

Delay spread: 30ns or 300ns

Channel estimation: Realistic channel estimation algorithms (e.g. LS or MMSE) as a baseline, FFS ideal channel estimation

Rank per UE: Rank 1-4. Companies are encouraged to report the rank number, and whether/how rank adaptation is applied

 

Agreement (modified by May 23rd post)

For the evaluation of the AI/ML based CSI feedback enhancement, study the verification of generalization. Companies are encouraged to report how they verify the generalization of the AI/ML model, including:

·        The training dataset of configuration(s)/ scenario(s), including potentially the mixed training dataset from multiple configurations/scenarios

·        The configuration(s)/ scenario(s) for testing/inference

·        The detailed list of configuration(s) and/or scenario(s)

·        Other details are not precluded

Note: Above agreement is updated as follows

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, study the verification of generalization. Companies are encouraged to report how they verify the generalization of the AI/ML model, including:

·        The configuration(s)/ scenario(s) for training dataset, including potentially the mixed training dataset from multiple configurations/scenarios

·        The configuration(s)/ scenario(s) for testing/inference

·        Other details are not precluded

 

Agreement

For the evaluation of the AI/ML based CSI compression sub use cases, companies are encouraged to report the details of their models, including:

·        The structure of the AI/ML model, e.g., type (CNN, RNN, Transformer, Inception, …), the number of layers, branches, real valued or complex valued parameters, etc.

·        The input CSI type, e.g., raw channel matrix estimated by UE, eigenvector(s) of the raw channel matrix estimated by UE, etc.

o   FFS: the input CSI is obtained from the channel with or without analog BF

·        The output CSI type, e.g., channel matrix, eigenvector(s), etc.

·        Data pre-processing/post-processing

·        Loss function

·        Others are not precluded

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the following parameters are taken into the baseline of EVM

·        Note: The 2nd column applies if R16 TypeII codebook is selected as baseline, and the 3rd column applies if R17 TypeII codebook is selected as baseline.

o   Additional assumptions from R17 TypeII EVM: the same consideration with respect to utilizing angle-delay reciprocity should be taken for the AI/ML based CSI feedback and the baseline scheme if R17 TypeII codebook is selected as baseline

o   FFS baseline for potential sub use cases involving CSI enhancement on time domain

·        Note: the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions.

o   The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.

·        FFS: modifications on top of the following table for the purpose of AI/ML related evaluations.

Parameter: Value (if R16 as baseline) / Value (if R17 as baseline)

Frequency Range:
-          If R16 as baseline: FR1 only, 2GHz as baseline, optional for 4GHz.
-          If R17 as baseline: FR1 only, 2GHz with duplexing gap of 200MHz between DL and UL, optional for 4GHz

Simulation bandwidth:
-          If R16 as baseline: 10 MHz for 15kHz as a baseline, and configurations which emulate larger BW, e.g., same sub-band size as 40/100 MHz with 30kHz, may be optionally considered. Above, 15kHz is replaced with 30kHz SCS for 4GHz.
-          If R17 as baseline: 20 MHz for 15kHz as a baseline (optional for 10 MHz with 15kHz), and configurations which emulate larger BW, e.g., same sub-band size as 40/100 MHz with 30kHz, may be optionally considered. Above, 15kHz is replaced with 30kHz SCS for 4GHz.

MIMO scheme (both baselines): SU/MU-MIMO with rank adaptation. Companies are encouraged to report the SU/MU-MIMO with RU.

Traffic load (Resource utilization) (both baselines): 20/50/70%. Companies are encouraged to report the MU-MIMO utilization.

 

 

Decision: As per email decision posted on May 25th,

Agreement 

For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the ‘Baseline for performance evaluation’ in the baseline of EVM is captured as follows

Baseline for performance evaluation

Companies need to report which option is used between

- Rel-16 TypeII Codebook as the baseline for performance and overhead evaluation.

- Rel-17 TypeII Codebook as the baseline for performance and overhead evaluation.

- FFS: Whether Type I Codebook can be optionally considered at least for performance evaluation

 

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, if the GCS/SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’ for rank>1 cases, companies to report the GCS/SGCS calculation/extension methods, including:

·        Method 1: Average over all layers

o   Note: the GCS/SGCS is computed between the eigenvector of the target CSI at resource unit i and the output vector of the output CSI at resource unit i, averaged over the total number of resource units and over multiple samples, where K is the rank.

·        Method 2: Weighted average over all layers

o   Note: Companies to report the formula (e.g., whether normalization is applied for eigenvalues)

·        Method 3: GCS/SGCS is separately calculated for each layer (e.g., for K layers, K GCS/SGCS values are derived respectively, and comparison is performed per layer)

·        Other methods are not precluded

·        FFS: Further down-selection among the above options or take one/a subset of the above methods as baseline(s).
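The three rank>1 extension methods listed above can be sketched as follows. Per-layer vectors are assumed as lists of complex values, and the eigenvalue-based weight normalization in Method 2 is one of the choices the agreement explicitly leaves to companies to report.

```python
# Illustrative sketch of the three GCS/SGCS extensions for rank > 1.
def sgcs_per_layer(V, V_hat):
    """Method 3: one SGCS value per layer (V, V_hat: lists of layer vectors)."""
    out = []
    for v, vh in zip(V, V_hat):
        inner = sum(a.conjugate() * b for a, b in zip(v, vh))
        nv = sum(abs(a) ** 2 for a in v)
        nvh = sum(abs(b) ** 2 for b in vh)
        out.append(abs(inner) ** 2 / (nv * nvh))
    return out

def sgcs_avg(V, V_hat):
    """Method 1: plain average over the K layers."""
    per = sgcs_per_layer(V, V_hat)
    return sum(per) / len(per)

def sgcs_weighted(V, V_hat, eigvals):
    """Method 2: eigenvalue-weighted average (assumed: weights normalized
    so that they sum to one; companies report their own formula)."""
    per = sgcs_per_layer(V, V_hat)
    total = sum(eigvals)
    return sum(w / total * s for w, s in zip(eigvals, per))
```

For a rank-2 example where layer 1 is reconstructed perfectly and layer 2 is orthogonal to its target, Method 1 gives 0.5, Method 3 gives [1, 0], and Method 2 with eigenvalues (3, 1) gives 0.75.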

 

Final summary in R1-2205492.

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2203069         Discussion on sub use cases of AI/ML for CSI feedback enhancement use case               FUTUREWEI

R1-2203141         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2203249         Discussion on potential enhancements for AI/ML based CSI feedback   ZTE

R1-2203282         Discussions on AI-CSI      Ericsson

R1-2203452         Discussion on other aspects on AI/ML for CSI feedback           CATT

R1-2203551         Other aspects on AI/ML for CSI feedback enhancement           vivo

R1-2203614         Discussion on AI/ML for CSI feedback enhancement GDCNI  (Late submission)

R1-2203729         Considerations on CSI measurement enhancements via AI/ML Sony

R1-2203809         Discussion on AI for CSI feedback enhancement        xiaomi

R1-2203898         Representative sub use cases for CSI feedback enhancement    Samsung

R1-2203939         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2204016         On sub use cases and other aspects of AI/ML for CSI feedback enhancement               OPPO

R1-2204051         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2204057         CSI compression with AI/ML          Beijing Jiaotong University

R1-2204150         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2204181         Discussions on AI-ML for CSI feedback       CAICT

R1-2204239         Discussion on other aspects on AI/ML for CSI feedback           Apple

R1-2204296         Discussion on other aspects on AI/ML for CSI feedback enhancement  CMCC

R1-2204376         Discussion on other aspects on AI/ML for CSI feedback enhancement  NTT DOCOMO, INC.

R1-2204418         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2204500         Discussion on other aspects on AI/ML for CSI feedback           Spreadtrum Communications

R1-2204568         Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement            TCL Communication

R1-2204572         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2204659         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2204794         Use-cases and specification for CSI feedback              Intel Corporation

R1-2204841         On other aspects of AI and ML for CSI feedback enhancement NVIDIA

R1-2204861         CSI feedback enhancements for AI/ML based MU-MIMO scheduling and parameter configuration       AT&T

R1-2204937         AI/ML for CSI feedback enhancement          Mavenir

R1-2205025         Other aspects on AIML for CSI feedback enhancement            Qualcomm Incorporated

R1-2205077         Views on sub-use case selection and STD impacts on AI/ML for CSI feedback enhancement       Fujitsu Limited

R1-2205101         On the challenges of collecting field data for training and testing of AI/ML for CSI feedback enhancement      MediaTek Inc.

 

[109-e-R18-AI/ML-04] – Huaning (Apple)

Email discussion on other aspects of AI/ML for CSI feedback enhancement by May 20

-        Check points: May 18

R1-2205467         Email discussion on other aspects of AI/ML for CSI enhancement         Moderator (Apple)

 

Decision: As per email decision posted on May 20th,

Agreement

Spatial-frequency domain CSI compression using two-sided AI model is selected as one representative sub use case. 

·        Note: Study of other sub use cases is not precluded.

·        Note: All pre-processing/post-processing, quantization/de-quantization are within the scope of the sub use case. 

Conclusion

·        Further discuss temporal-spatial-frequency domain CSI compression using two-sided model as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion.

·        Further discuss improving the CSI accuracy based on traditional codebook design using one-sided model as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion.

·        Further discuss CSI prediction using one-sided model as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion

·        Further discuss CSI-RS configuration and overhead reduction as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion

·        Further discuss resource allocation and scheduling as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion

·        Further discuss joint CSI prediction and compression as a possible sub-use case for CSI feedback enhancement after evaluation methodology discussion.

 

Final summary in R1-2205556.

9.2.3        AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2204377        Discussion on evaluation on AI/ML for beam management NTT DOCOMO, INC.

·        Proposal 1: Time-domain beam prediction should be studied as a sub use-case of beam management in the Rel-18 SI on AI/ML for NR air interface.

·        Proposal 2: 3GPP statistical channel models are considered in the evaluation for representative sub use-case selection.

·        Proposal 3: Discuss and decide whether and which deterministic channel models should be used to capture the final evaluation results of selected sub use-cases.

·        Proposal 4: Spatial-domain beam estimation should be studied as a sub use-case of beam management in the Rel-18 SI on AI/ML for NR air interface.

Decision: The document is noted.

 

R1-2203250        Evaluation assumptions on AI/ML for beam management  ZTE

Proposal 1: Due to stronger computing power and comprehensive awareness of the surrounding environment, AI inference is performed on the gNB side to ensure high prediction accuracy and low processing delay.

Proposal 2: Top-K candidate beams with higher predicted RSRP can be filtered out for refined small-range beam sweeping, resulting in a relatively good trade-off between training overhead and performance.

Proposal 3: Deep neural network is exploited for the spatial-domain beam prediction due to its excellent ability on classification tasks and learning complex nonlinear relationships.

Proposal 4: AI/ML based spatial-domain beam prediction can significantly reduce the beam training overhead by avoiding exhaustive beam sweeping.

Proposal 5: Beam prediction accuracy can be used as the performance indicators at the early stage, which may include top-1/top-K beam prediction accuracy, average RSRP difference, and CDFs of RSRP difference between the AI-predicted beam and ideal beam.

Proposal 6: Since the data sets and AI models used by different companies are different, it is necessary to provide common data sets and baseline models for simulation calibration and performance cross-validation.

Proposal 7: AI/ML based solutions are expected to be studied and evaluated to do beam prediction so as to reduce beam tracking latency and RS overhead in high mobility scenarios.

Proposal 8: Consider predictable mobility for beam management as an enhancement aspect for improving UE experience in FR2 high mobility scenario (e.g., high-speed train and high-way).

-         Study and evaluate the feasibility and potential system level gain on predictable mobility for beam management based on the identified scenario(s).

Decision: The document is noted.

 

R1-2203142         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2203255         Model and data-driven beam predictions in high-speed railway scenarios               PML

R1-2203283         Evaluations on AI-BM      Ericsson

R1-2203374         Discussion for evaluation on AI/ML for beam management     InterDigital, Inc.

R1-2203453         Discussion on evaluation on AI/ML for beam management      CATT

R1-2203552         Evaluation on AI/ML for beam management vivo

R1-2203810         Evaluation on AI/ML for beam management xiaomi

R1-2203899         Evaluation on AI ML for Beam management              Samsung

R1-2204017         Evaluation methodology and preliminary results on AI/ML for beam management               OPPO

R1-2204059         Evaluation methodology of beam management with AI/ML     Beijing Jiaotong University

R1-2204102         Discussion on evaluation of AI/ML for beam management use case               FUTUREWEI

R1-2204151         Evaluation on AI/ML for beam management LG Electronics

R1-2204182         Some discussions on evaluation on AI-ML for Beam management         CAICT

R1-2204240         Evaluation on AI based Beam Management Apple

R1-2204297         Discussion on evaluation on AI/ML for beam management      CMCC

R1-2204419         Evaluation on AI/ML for beam management Lenovo

R1-2204573         Evaluation on ML for beam management     Nokia, Nokia Shanghai Bell

R1-2204795         Evaluation for beam management   Intel Corporation

R1-2204842         On evaluation assumptions of AI and ML for beam management           NVIDIA

R1-2204862         Evaluation methodology aspects on AI/ML for beam management         AT&T

R1-2205026         Evaluation on AIML for beam management Qualcomm Incorporated

R1-2205078         Evaluation on AI/ML for beam management Fujitsu Limited

R1-2205102         AI-assisted Target Cell Prediction for Inter-cell Beam Management       MediaTek Inc.

 

[109-e-R18-AI/ML-05] – Feifei (Samsung)

Email discussion on evaluation of AI/ML for beam management by May 20

-        Check points: May 18

R1-2205269        Feature lead summary #1 evaluation of AI/ML for beam management               Moderator (Samsung)

From May 17th GTW session

Agreement

·        For dataset construction and performance evaluation (if applicable) for the AI/ML in beam management, system level simulation approach is adopted as baseline

o   Link level simulation is optionally adopted

Agreement

·        At least for temporal beam prediction, companies report which one of the following spatial consistency procedures is used:

o   Procedure A in TR38.901

o   Procedure B in TR38.901

Agreement

·        At least for temporal beam prediction, Dense Urban (macro-layer only, TR 38.913) is the basic scenario for dataset generation and performance evaluation.

o   Other scenarios are not precluded.

·        For spatial-domain beam prediction, Dense Urban (macro-layer only, TR 38.913) is the basic scenario for dataset generation and performance evaluation.

o   Other scenarios are not precluded.

Agreement

·        At least for spatial-domain beam prediction in the initial phase of the evaluation, a UE trajectory model does not necessarily need to be defined.

Agreement

·        At least for temporal beam prediction in the initial phase of the evaluation, a UE trajectory model is defined. FFS on the details.

 

R1-2205270         Feature lead summary #2 evaluation of AI/ML for beam management   Moderator (Samsung)

R1-2205271         Feature lead summary #3 evaluation of AI/ML for beam management   Moderator (Samsung)

 

Decision: As per email decision posted on May 20th,

Agreement

·        UE rotation speed is reported by companies.

o   Note: UE rotation speed = 0, i.e., no UE rotation, is not precluded.

Agreement

·        For AI/ML in beam management evaluation, RAN1 does not attempt to define any common AI/ML model as a baseline.

Conclusion

Further study AI/ML model generalization in beam management by evaluating the inference performance of beam prediction under multiple different scenarios/configurations.

·        FFS on different scenarios/configurations

·        Companies report the training approach, at least including the dataset assumption for training

Agreement

·        For evaluation of AI/ML in BM, the KPI may include the model complexity and computational complexity.

o   FFS: the details of model complexity and computational complexity

Agreement

·        For spatial-domain beam prediction, further study the following options as baseline performance

o   Option 1: Select the best beam within Set A of beams based on the measurement of all RS resources or all possible beams of beam Set A (exhaustive beam sweeping)

§  FFS CSI-RS/SSB as the RS resources

o   Option 2: Select the best beam within Set A of beams based on the measurement of RS resources from Set B of beams

§  FFS: Set B is a subset of Set A and/or Set A consists of narrow beams and Set B consists of wide beams

§  FFS: how conventional scheme to obtain performance KPIs

§  FFS: how to determine the subset of RS resources is reported by companies

o   Other options are not precluded.
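The two baseline options above can be sketched as follows: Option 1 measures every beam in Set A (exhaustive sweeping), while Option 2 measures only Set B. The beam counts, the choice of Set B as a uniform subset, and the synthetic RSRP values are all editorial assumptions; the Set A/Set B relationship is itself FFS in the agreement.

```python
# Illustrative sketch of the two baseline options for spatial-domain beam
# prediction. RSRP values are synthetic; set sizes are assumptions.
import random

random.seed(1)
set_a = list(range(32))                  # e.g. 32 narrow beams (assumed)
set_b = set_a[::4]                       # e.g. every 4th beam as Set B (assumed)
rsrp = {b: random.uniform(-110.0, -70.0) for b in set_a}  # dBm, synthetic

best_option1 = max(set_a, key=rsrp.get)  # Option 1: exhaustive sweeping
best_option2 = max(set_b, key=rsrp.get)  # Option 2: measure Set B only
# A simple figure in the spirit of the KPI discussion: RSRP difference
# between the exhaustive-sweep beam and the Set-B-based choice.
rsrp_gap = rsrp[best_option1] - rsrp[best_option2]
```

An AI/ML model for this sub use case would replace the Option 2 selection rule, predicting the best Set A beam from the Set B measurements.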

 

Decision: As per email decision posted on May 22nd,

Agreement

·        For dataset generation and performance evaluation for AI/ML in beam management, take the parameters (if applicable) in Table 1.2-1b for Dense Urban scenario for SLS

Table 1.2-1b Assumptions for Dense Urban scenario for AI/ML in beam management

Parameters: Values

Frequency Range: FR2 @ 30 GHz; SCS: 120 kHz

Deployment: 200m ISD, 2-tier model with wrap-around (7 sites, 3 sectors/cells per site). Other deployment assumption is not precluded.

Channel model: UMa with distance-dependent LoS probability function defined in Table 7.4.2-1 in TR 38.901.

System BW: 80MHz

UE Speed:
·        For spatial domain beam prediction: 3km/h
·        For time domain beam prediction: 30km/h (baseline), 60km/h (optional)
·        Other values are not precluded

UE distribution:
·        FFS UEs per sector/cell for evaluation. More UEs per sector/cell for data generation is not precluded.
·        For spatial domain beam prediction, FFS:
o   Option 1: 80% indoor, 20% outdoor as in TR 38.901
o   Option 2: 100% outdoor
·        For time domain prediction: 100% outdoor

Transmission Power: Maximum Power and Maximum EIRP for base station and UE as given by the corresponding scenario in 38.802 (Table A.2.1-1 and Table A.2.1-2)

BS Antenna Configuration:
·        [One panel: (M, N, P, Mg, Ng) = (4, 8, 2, 1, 1), (dV, dH) = (0.5, 0.5) λ as baseline]
·        [Four panels: (M, N, P, Mg, Ng) = (4, 8, 2, 2, 2), (dV, dH) = (0.5, 0.5) λ, (dg,V, dg,H) = (2.0, 4.0) λ as optional]
·        Other assumptions are not precluded.
·        Companies to explain TXRU weights mapping, beam selection, and number of BS beams.

BS Antenna radiation pattern: TR 38.802 Table A.2.1-6, Table A.2.1-7

UE Antenna Configuration:
·        [Panel structure: (M,N,P) = (1,4,2)]
·        2 panels (left, right) with (Mg, Ng) = (1, 2) as baseline
·        Other assumptions are not precluded
·        Companies to explain TXRU weights mapping, beam and panel selection, and number of UE beams.

UE Antenna radiation pattern: TR 38.802 Table A.2.1-8, Table A.2.1-10

Beam correspondence: Companies to explain beam correspondence assumptions (in accordance with the two types agreed in RAN4)

Link adaptation: Based on CSI-RS

Traffic Model: FFS:
·        Option 1: Full buffer
·        Option 2: FTP model
·        Other options are not precluded

Inter-panel calibration for UE: Ideal, non-ideal following 38.802 (optional) – explain any errors

Control and RS overhead: Companies report details of the assumptions

Control channel decoding: Ideal or non-ideal (companies explain how it is modelled)

UE receiver type: MMSE-IRC as the baseline; other advanced receivers are not precluded

BF scheme: Companies explain what scheme is used

Transmission scheme: Multi-antenna port transmission schemes. Note: companies explain details of the transmission scheme used.

Other simulation assumptions: Companies to explain serving TRP selection and scheduling algorithm

Other potential impairments: Not modelled (assumed ideal). If impairments are included, companies will report the details of the assumed impairments

BS Tx Power: [40 dBm]

Maximum UE Tx Power: 23 dBm

BS receiver Noise Figure: 7 dB

UE receiver Noise Figure: 10 dB

Inter site distance: 200m

BS Antenna height: 25m

UE Antenna height: 1.5 m

Car penetration Loss: 38.901, sec 7.4.3.2: μ = 9 dB, σp = 5 dB

 

Agreement

·        For temporal beam prediction, the following options can be considered as a starting point for UE trajectory model for further study. Companies report further changes or modifications based on the following options for UE trajectory model. Other options are not precluded.

o   Option #2: Linear trajectory model with random direction change.

§  UE moving trajectory: the UE moves in a straight line along the selected direction until the end of a time interval, where the length of the time interval is drawn from an exponential distribution with an average interval length of, e.g., 5s, with a granularity of 100 ms.

·        UE moving direction change: at the end of the time interval, the UE changes its moving direction by an angle difference A_diff relative to the beginning of the time interval, drawn from a uniform distribution within [-45°, 45°].

·        The UE moves in a straight line within the time interval at a fixed speed.

o   Option #3: Linear trajectory model with random and smooth direction change.

§  UE moving trajectory: UE will change the moving direction by multiple steps within an time internal, where the length of the time interval is provided by using an exponential distribution with average interval length, e.g., 5s, with granularity of 100 ms.

·        UE moving direction change: At the end of the time interval, the UE changes its moving direction by an angle difference A_diff relative to the beginning of the time interval, drawn from a uniform distribution within [-45°, 45°].

·        The time interval is further broken into N sub-intervals, e.g. 100 ms per sub-interval, and at the end of each sub-interval, the UE changes direction by the angle A_diff/N.

·        The UE moves in a straight line within each time sub-interval at a fixed speed.

o   Option #4: Random direction straight-line trajectories.

§  Initial UE location, moving direction and speed: UE is randomly dropped in a cell, and an initial moving direction is randomly selected, with a fixed speed.

·        The initial UE location should be randomly dropped within the following blue area

where d1 is the minimum distance that UE should be away from the BS.

o   Each sector is a cell and that the cell association is geometry based.

o   During the simulation, inter-cell handover or switching should be disabled.

For training data generation

§  For each UE moving trajectory: the total length of the UE trajectory can be set as T seconds if defined in time, or as D meters if defined in distance.

·        The value of T (or D) can be further discussed

·        The trajectory sampling interval granularity depends on the UE speed and can be further discussed.

§  The UE can move in a straight line along the entire trajectory, or

§  The UE can move in a straight line during each time interval, where the time interval is drawn from an exponential distribution with an average interval length

·        The UE may change the moving direction at the end of the time interval, by an angle difference A_diff relative to the beginning of the time interval drawn from a uniform distribution within [-45°, 45°]

§  If the UE trajectory hits the cell boundary (the red line), the trajectory should be terminated.

·        If the trajectory length (in time) is less than the length of observation window + prediction window, the trajectory should be discarded.

·        At the current stage, the length of observation window + prediction window is not fixed and the companies can report their values.

·        Generalization issue is FFS
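
For illustration, the segment-based trajectory options above (here Option #2) can be sketched as follows; the speed, duration, and all function/variable names are illustrative assumptions, not agreed values:

```python
import numpy as np

def gen_trajectory_option2(x0, y0, speed_mps, total_time_s, rng,
                           mean_interval_s=5.0, step_s=0.1):
    """Option #2 sketch: straight segments with random direction changes.

    Interval lengths are drawn from an exponential distribution (average
    5 s, 100 ms granularity); at the end of each interval the direction
    changes by A_diff drawn uniformly from [-45, 45] degrees.
    """
    xs, ys = [x0], [y0]
    theta = rng.uniform(0.0, 2.0 * np.pi)  # initial moving direction
    t = 0.0
    while t < total_time_s:
        # exponential interval length, rounded to the 100 ms granularity
        interval = max(step_s,
                       round(rng.exponential(mean_interval_s) / step_s) * step_s)
        n_steps = int(round(min(interval, total_time_s - t) / step_s))
        for _ in range(n_steps):  # straight-line motion at fixed speed
            xs.append(xs[-1] + speed_mps * step_s * np.cos(theta))
            ys.append(ys[-1] + speed_mps * step_s * np.sin(theta))
        t += interval
        # direction change A_diff ~ U[-45 deg, +45 deg] at the interval end
        theta += np.deg2rad(rng.uniform(-45.0, 45.0))
    return np.array(xs), np.array(ys)
```

Option #3 differs only in applying the same A_diff gradually, by A_diff/N per 100 ms sub-interval, rather than in one step.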

 

Agreement

·        For temporal beam prediction, further study the following options as baseline performance

o   Option 1a: Select the best beam for T2 within Set A of beams based on the measurements of all the RS resources or all possible beams from Set A of beams at the time instants within T2

o   Option 2: Select the best beam for T2 within Set A of beams based on the measurements of all the RS resources from Set B of beams at the time instants within T1

§  Companies explain the detail on how to select the best beam for T2 from Set A based on the measurements in T1

o   Where T2 is the time duration for the best beam selection, and T1 is a time duration to obtain the measurements of all the RS resources from Set B of beams.

§  T1 and T2 are aligned with those for AI/ML based methods

o   Whether Set A and Set B are the same or different depend on the sub-use case

o   Other options are not precluded.

Agreement

·        For dataset generation and performance evaluation for AI/ML in beam management, take the following assumptions for LLS as an optional methodology

Parameter

Value

Frequency

30GHz.

Subcarrier spacing

120kHz

Data allocation

[8 RBs] as baseline; companies can report a larger number of RBs

First 2 OFDM symbols for PDCCH, and following 12 OFDM symbols for data channel

PDCCH decoding

Ideal or non-ideal (companies explain how Doppler is modelled)

Channel model

FFS:

LOS channel: CDL-D extension, DS = 100ns

NLOS channel: CDL-A/B/C extension, DS = 100ns

Companies explain details of the extension methodology considering spatial consistency

 

Other channel models are not precluded.

BS antenna configurations

·        One panel: (M, N, P, Mg, Ng) = (4, 8, 2, 1, 1), (dV, dH) = (0.5, 0.5) λ as baseline

·        Other assumptions are not precluded.

 

Companies to explain TXRU weights mapping.

Companies to explain beam selection.

Companies to explain number of BS beams

BS antenna element radiation pattern

Same as SLS

BS antenna height and antenna array downtilt angle

25m, 110°

UE antenna configurations

Panel structure: (M, N, P) = (1, 4, 2), 

·        2 panels (left, right) with (Mg, Ng) = (1, 2) as baseline

·        1 panel as optional

·        Other assumptions are not precluded

 

Companies to explain TXRU weights mapping.

Companies to explain beam and panel selection.

Companies to explain number of UE beams

UE antenna element radiation pattern

Same as SLS

UE moving speed

Same as SLS

Raw data collection format

Depends on sub-use case and companies’ choice.

 

 

Decision: As per email decision posted on May 25th,

Agreement

·        For UE trajectory model, UE orientation can be independent from UE moving trajectory model. FFS on the details. 

o   Other UE orientation model is not precluded.

Agreement

·        Companies are encouraged to report the following aspects of the AI/ML model in RAN1#110. FFS on whether some of the aspects need to be defined or reported.

o   Description of AI/ML model, e.g., NN architecture type

o   Model inputs/outputs (per sub-use case)

o   Training methodology, e.g.

§  Loss function/optimization function

§  Training/validation/testing dataset:

·        Dataset size, number of training/validation/test samples

·        Model validity area: e.g., whether model is trained for single sector or multiple sectors

·        Details on Model monitoring and model update, if applicable

o   Other related aspects are not precluded

 

Agreement

·        To evaluate the performance of AI/ML in beam management, further study the following KPI options:

o   Beam prediction accuracy related KPIs, may include the following options:

§  Average L1-RSRP difference of Top-1 predicted beam

§  Beam prediction accuracy (%) for Top-1 and/or Top-K beams, FFS the definition:

·        Option 1: The beam prediction accuracy (%) is the percentage of “the Top-1 predicted beam is one of the Top-K genie-aided beams”

·        Option 2: The beam prediction accuracy (%) is the percentage of “the Top-1 genie-aided beam is one of the Top-K predicted beams”

 

§  CDF of L1-RSRP difference for Top-1 predicted beam

§  Beam prediction accuracy (%) with 1dB margin for Top-1 beam

·        The beam prediction accuracy (%) with 1dB margin is the percentage of the Top-1 predicted beam “whose ideal L1-RSRP is within 1dB of the ideal L1-RSRP of the Top-1 genie-aided beam”

 

§  The definition of the L1-RSRP difference of the Top-1 predicted beam:

·        the difference between the ideal L1-RSRP of Top-1 predicted beam and the ideal L1-RSRP of the Top-1 genie-aided beam

§  Other beam prediction accuracy related KPIs are not precluded and can be reported by companies.

o   System performance related KPIs, may include the following options:

§  UE throughput: CDF of UE throughput, avg. and 5%ile UE throughput

§  RS overhead reduction at least for spatial-domain beam prediction at least for top-1 beam:

·        1-N/M,

o   where N is the number of beams (with reference signal (SSB and/or CSI-RS)) required for measurement

o   where (FFS) M is the total number of beams

o   Note: Non-AI/ML approach based on the measurement of these M beams may be used as a baseline

·        FFS on whether to define a proper value for M for evaluation.

§  Other system performance related KPIs are not precluded and can be reported by companies.

o   Other KPIs are not precluded and can be reported by companies, for example:

§  Reporting overhead reduction: (FFS) the number of UCI reports and UCI payload size, for temporal/spatial prediction

§  Latency reduction:

·        (FFS) (1 – [Total transmission time of N beams] / [Total transmission time of M beams])

o   where N is the number of beams (with reference signal (SSB and/or CSI-RS)) in the input beam set required for measurement

o   where M is the total number of beams

§  Power consumption reduction: FFS on details
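
The beam prediction accuracy and RS overhead reduction KPI options above can be made concrete with a short sketch; the function names, array names, and shapes are illustrative assumptions, not agreed definitions:

```python
import numpy as np

def topk_accuracy(pred_rsrp, ideal_rsrp, k):
    """Beam prediction accuracy (%) under the two candidate definitions.

    pred_rsrp / ideal_rsrp: (n_samples, n_beams) predicted and genie-aided
    (ideal) L1-RSRP over Set A.
    """
    top1_pred = np.argmax(pred_rsrp, axis=1)
    top1_genie = np.argmax(ideal_rsrp, axis=1)
    topk_pred = np.argsort(pred_rsrp, axis=1)[:, -k:]
    topk_genie = np.argsort(ideal_rsrp, axis=1)[:, -k:]
    # Option 1: Top-1 predicted beam is one of the Top-K genie-aided beams
    opt1 = 100.0 * np.mean([p in g for p, g in zip(top1_pred, topk_genie)])
    # Option 2: Top-1 genie-aided beam is one of the Top-K predicted beams
    opt2 = 100.0 * np.mean([g in p for g, p in zip(top1_genie, topk_pred)])
    return opt1, opt2

def accuracy_with_margin(pred_rsrp, ideal_rsrp, margin_db=1.0):
    """% of samples whose Top-1 predicted beam has ideal L1-RSRP within
    margin_db of the ideal L1-RSRP of the Top-1 genie-aided beam."""
    top1_pred = np.argmax(pred_rsrp, axis=1)
    best = ideal_rsrp.max(axis=1)
    chosen = ideal_rsrp[np.arange(len(ideal_rsrp)), top1_pred]
    return 100.0 * np.mean(best - chosen <= margin_db)

def rs_overhead_reduction(n_measured, m_total):
    """RS overhead reduction KPI: 1 - N/M."""
    return 1.0 - n_measured / m_total
```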

 

Final summary in R1-2205641.

9.2.3.2       Other aspects on AI/ML for beam management

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2203143         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2203251         Discussion on potential enhancements for AI/ML based beam management         ZTE

R1-2203284         Discussions on AI-BM      Ericsson

R1-2203375         Discussion for other aspects on AI/ML for beam management InterDigital, Inc.

R1-2203454         Discussion on other aspects on AI/ML for beam management  CATT

R1-2203553         Other aspects on AI/ML for beam management          vivo

R1-2203691         Discussion on other aspects on AI/ML for beam management  NEC

R1-2203730         Consideration on AI/ML for beam management          Sony

R1-2203811         Other aspects on AI/ML for beam management          xiaomi

R1-2203900         Representative sub use cases for beam management   Samsung

R1-2204018         Other aspects of AI/ML for beam management           OPPO

R1-2204060         Beam management with AI/ML      Beijing Jiaotong University

R1-2204078         Discussion on sub use cases of beam management      Panasonic

R1-2204103         Discussion on sub use cases of AI/ML for beam management use case               FUTUREWEI

R1-2204152         Other aspects on AI/ML for beam management          LG Electronics

R1-2204183         Discussions on AI-ML for Beam management            CAICT

R1-2204241         Enhancement on AI based Beam Management            Apple

R1-2204298         Discussion on other aspects on AI/ML for beam management  CMCC

R1-2204378         Discussion on other aspects on AI/ML for beam management  NTT DOCOMO, INC.

R1-2204420         Further aspects of AI/ML for beam management        Lenovo

R1-2204501         Discussion on other aspects on AI/ML for beam management  Spreadtrum Communications

R1-2204569         Discussions on Sub-Use Cases in AI/ML for Beam Management           TCL Communication

R1-2204574         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2204796         Use-cases and specification for beam management     Intel Corporation

R1-2204843         On other aspects of AI and ML for beam management              NVIDIA

R1-2204863         System performance aspects on AI/ML for beam management AT&T

R1-2204938         AI/ML for beam management         Mavenir

R1-2205027         Other aspects on AIML for beam management           Qualcomm Incorporated

R1-2205079         Sub-use cases and spec impact on AI/ML for beam management            Fujitsu Limited

R1-2205094         Discussion on Codebook Enhancement with AI/ML   Charter Communications, Inc

 

[109-e-R18-AI/ML-06] – Zhihua (OPPO)

Email discussion on other aspects of AI/ML for beam management by May 20

-        Check points: May 18

R1-2205252         Summary#1 for other aspects on AI/ML for beam management              Moderator (OPPO)

R1-2205253        Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

From May 17th GTW session

Agreement

For AI/ML-based beam management, support BM-Case1 and BM-Case2 for characterization and baseline performance evaluations

·        BM-Case1: Spatial-domain DL beam prediction for Set A of beams based on measurement results of Set B of beams

·        BM-Case2: Temporal DL beam prediction for Set A of beams based on the historic measurement results of Set B of beams

·        FFS: details of BM-Case1 and BM-Case2

·        FFS: other sub use cases

Note: For BM-Case1 and BM-Case2, Beams in Set A and Set B can be in the same Frequency Range

 

Agreement

Regarding the sub use case BM-Case2, the measurement results of K (K>=1) latest measurement instances are used for AI/ML model input:

·        The value of K is up to companies

Agreement

Regarding the sub use case BM-Case2, AI/ML model output should be F predictions for F future time instances, i.e., one prediction per future time instance.

·        At least F = 1

·        The other value(s) of F is up to companies
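
Taken together, the two BM-Case2 agreements above fix the nominal model I/O: measurements from the K latest instances in, F future predictions out. A shape-only sketch follows; the beam-set sizes, K, F, and the placeholder linear map are illustrative assumptions:

```python
import numpy as np

# Illustrative sizes (assumptions, not agreed values)
N_SET_B, N_SET_A = 8, 32   # measured beams (Set B), predicted beams (Set A)
K, F = 4, 2                # K latest instances in, F future instances out

def dummy_model(x):
    """Stand-in for an AI/ML model: maps a (K, N_SET_B) L1-RSRP history to
    (F, N_SET_A) predicted metrics, one prediction per future time instance."""
    assert x.shape == (K, N_SET_B)
    w = np.zeros((F * N_SET_A, K * N_SET_B))   # placeholder linear layer
    return (w @ x.reshape(-1)).reshape(F, N_SET_A)

y = dummy_model(np.random.rand(K, N_SET_B))
assert y.shape == (F, N_SET_A)   # F predictions, each over Set A
```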

Agreement

For the sub use case BM-Case1, consider both Alt.1 and Alt.2 for further study:

·        Alt.1: AI/ML inference at NW side

·        Alt.2: AI/ML inference at UE side

Agreement

For the sub use case BM-Case2, consider both Alt.1 and Alt.2 for further study:

·        Alt.1: AI/ML inference at NW side

·        Alt.2: AI/ML inference at UE side

 

R1-2205453         Summary#3 for other aspects on AI/ML for beam management              Moderator (OPPO)

Decision: As per email decision posted on May 20th,

Conclusion

For the sub use case BM-Case1, consider the following alternatives for further study:

·        Alt.1: Set B is a subset of Set A

o   FFS: the number of beams in Set A and B

o   FFS: how to determine Set B out of the beams in Set A (e.g., fixed pattern, random pattern, …)

·        Alt.2: Set A and Set B are different (e.g. Set A consists of narrow beams and Set B consists of wide beams)

o   FFS: the number of beams in Set A and B

o   FFS: QCL relation between beams in Set A and beams in Set B

o   FFS: construction of Set B (e.g., regular pre-defined codebook, codebook other than regular pre-defined one)

·        Note1: Set A is for DL beam prediction and Set B is for DL beam measurement.

·        Note2: The narrow and wide beam terminology is for SI discussion only and has no specification impact

·        Note3: The codebook constructions of Set A and Set B can be clarified by the companies.

Conclusion

Regarding the sub use case BM-Case1, further study the following alternatives for AI/ML input:

·        Alt.1: Only L1-RSRP measurement based on Set B

·        Alt.2: L1-RSRP measurement based on Set B and assistance information

o   FFS: Assistance information. The following were mentioned by companies in the discussion: Tx and/or Rx beam shape information (e.g., Tx and/or Rx beam pattern, Tx and/or Rx beam boresight direction (azimuth and elevation), 3dB beamwidth, etc.), expected Tx and/or Rx beam for the prediction (e.g., expected Tx and/or Rx angle, Tx and/or Rx beam ID for the prediction), UE position information, UE direction information, Tx beam usage information, UE orientation information, etc.

§  Note: The provision of assistance information may be infeasible due to the concern of disclosing proprietary information to the other side.

·        Alt.3: CIR based on Set B

·        Alt.4: L1-RSRP measurement based on Set B and the corresponding DL Tx and/or Rx beam ID

·        Note1: It is up to companies to provide other alternative(s) including the combination of some alternatives

·        Note2: All the inputs are “nominal” and only for discussion purpose.

Conclusion

For the sub use case BM-Case2, further study the following alternatives with potential down-selection:

·        Alt.1: Set A and Set B are different (e.g. Set A consists of narrow beams and Set B consists of wide beams)

o   FFS: QCL relation between beams in Set A and beams in Set B

·        Alt.2: Set B is a subset of Set A (Set A and Set B are not the same)

o   FFS: how to determine Set B out of the beams in Set A (e.g., fixed pattern, random pattern, …)

·        Alt.3: Set A and Set B are the same

·        Note1: Predicted beam(s) are selected from Set A and measured beams used as input are selected from Set B.

·        Note2: It is up to companies to provide other alternative(s)

·        Note3: The narrow and wide beam terminology is for SI discussion only and has no specification impact

Conclusion

Regarding the sub use case BM-Case2, further study the following alternatives of measurement results for AI/ML input (for each past measurement instance):

·        Alt.1: Only L1-RSRP measurement based on Set B

·        Alt 2: L1-RSRP measurement based on Set B and assistance information

o   FFS: Assistance information. The following were mentioned by companies in the discussion: Tx and/or Rx beam angle, position information, UE direction information, positioning-related measurement (such as Multi-RTT), expected Tx and/or Rx beam/occasion for the prediction (e.g., expected Tx and/or Rx beam angle for the prediction, expected occasions of the prediction), Tx and/or Rx beam shape information (e.g., Tx and/or Rx beam pattern, Tx and/or Rx beam boresight directions (azimuth and elevation), 3dB beamwidth, etc.), increase ratio of L1-RSRP for best N beams, UE orientation information

§  Note: The provision of assistance information may be infeasible due to the concern of disclosing proprietary information to the other side.

·        Alt.3: L1-RSRP measurement based on Set B and the corresponding DL Tx and/or Rx beam ID

·        Note1: It is up to companies to provide other alternative(s) including the combination of some alternatives

·        Note2: All the inputs are “nominal” and only for discussion purpose.

 

Final summary in R1-2205454.

9.2.4        AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2203554        Evaluation on AI/ML for positioning accuracy enhancement            vivo

·        Select the InF-DH scenario with clutter parameter {density 60%, height 6m, size 2m} as a typical scenario for positioning accuracy enhancement evaluation.

·        Dataset and AI model sharing among different companies should be encouraged.

·        For the purpose of link level and system level evaluation, statistical models (from TR 38.901 and TR 38.857) are utilized to generate dataset for AI/ML based positioning for model training/validation and testing.

o   Field data measured in actual deployment for AI/ML model performance testing should be allowed and encouraged

·        The positioning accuracy performance of AI/ML based positioning should be evaluated under all scenarios.

·        Spatial consistency assumption should be adopted for performance evaluation.

·        Performance related KPIs, such as @50%, @90% positioning accuracy defined in TR 38.857, can be used directly to evaluate the performance gain of AI/ML based positioning.

·        Consider the following different levels of generalization performance for performance evaluation.

o   Generalization performance from one cell to another

o   Generalization performance from one drop to another

o   Generalization performance from one scenario to another

·        Computational complexity, parameter quantity and training data requirement are three crucial cost-related KPIs for AI/ML based positioning, and should be considered with high priority at the beginning of this study.

·        Support time domain CIR as the model input for AI/ML based positioning.

·        Study further on the benefits of two-step positioning for AI/ML based positioning in terms of positioning accuracy and AI model generalization.

·        Study further on the benefits of fine-tuning for AI/ML based positioning in terms of positioning accuracy and AI model generalization.

Decision: The document is noted.

 

R1-2203144        Evaluation on AI/ML for positioning accuracy enhancement            Huawei, HiSilicon

Proposal 1: For AI/ML-based LOS/NLOS identification evaluation, adopt the normalized Power Delay Profile as the training inputs.

Proposal 2: For AI/ML-based fingerprint positioning evaluation, adopt the Channel Impulse Response as the training inputs.

Proposal 3: For AI/ML-based positioning evaluation, adopt the positioning accuracy and model complexity as the KPIs.

Proposal 4: For heavy NLOS scenarios, spatial consistent channel modeling shall be employed for the evaluation of AI/ML-based fingerprint positioning. Adopt one or both of the following concepts:

·          2D-Filtering method.

·          Interpolation method.

Proposal 5: For AI/ML-based positioning evaluation, adopt IIoT scenario as baseline.

·          A small number of gNB antennas should be evaluated.

Proposal 6: For AI/ML-based LOS/NLOS Identification evaluation, the baseline solution should be aligned with an existing traditional algorithm.

Proposal 7: For AI/ML-based positioning evaluation, training inputs generated from simulation platform should be a baseline.

Proposal 8: AI/ML-based fingerprint positioning should be studied for positioning accuracy enhancements under heavy NLOS conditions in Rel-18.

Proposal 9: For the evaluation of AI/ML-based fingerprint positioning, study the generalization of the AI/ML model for varying environments.

Decision: The document is noted.

 

R1-2203252         Evaluation assumptions on AI/ML for positioning      ZTE

R1-2203285         Evaluations on AI-Pos       Ericsson

R1-2203455         Discussion on evaluation on AI/ML for positioning    CATT

R1-2203812         Initial views on the evaluation on AI/ML for positioning accuracy enhancement               xiaomi

R1-2203901         Evaluation on AI ML for Positioning            Samsung

R1-2204019         Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement       OPPO

R1-2204104         Discussion on evaluation of AI/ML for positioning accuracy enhancements use case               FUTUREWEI

R1-2204153         Evaluation on AI/ML for positioning accuracy enhancement    LG Electronics

R1-2204159         Evaluation assumptions and results for AI/ML based positioning            InterDigital, Inc.

R1-2204184         Some discussions on evaluation on AI-ML for positioning accuracy enhancement               CAICT

R1-2204242         Evaluation on AI/ML for positioning accuracy enhancement    Apple

R1-2204299         Discussion on evaluation on AI/ML for positioning accuracy enhancement               CMCC

R1-2204421         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2204575         Evaluation on ML for positioning accuracy enhancement         Nokia, Nokia Shanghai Bell

R1-2204837         Evaluation on AI/ML for positioning accuracy enhancement    Fraunhofer IIS, Fraunhofer HHI

R1-2204844         On evaluation assumptions of AI and ML for positioning enhancement NVIDIA

R1-2205028         Evaluation on AIML for positioning accuracy enhancement     Qualcomm Incorporated

R1-2205066         Initial view on AI/ML application to positioning use cases       Rakuten Mobile

R1-2205080         Discussion on Evaluation related issues for AI/ML for positioning accuracy enhancement       Fujitsu Limited

 

[109-e-R18-AI/ML-07] – Yufei (Ericsson)

Email discussion on evaluation of AI/ML for positioning accuracy enhancement by May 20

-        Check points: May 18

R1-2205217         Summary #1 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement  Moderator (Ericsson)

R1-2205218         Summary #2 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement  Moderator (Ericsson)

R1-2205219        Summary #3 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement       Moderator (Ericsson)

From May 17th GTW session

Agreement

The IIoT indoor factory (InF) scenario is a prioritized scenario for evaluation of AI/ML based positioning.

 

Agreement

For evaluation of AI/ML based positioning, at least the InF-DH sub-scenario is prioritized in the InF deployment scenario for FR1 and FR2.

 

Agreement

For InF-DH channel, the prioritized clutter parameters {density, height, size} are:

·        {60%, 6m, 2m};

·        {40%, 2m, 2m}.

o   Note: an individual company may treat {40%, 2m, 2m} as optional in their evaluation considering their specific AI/ML design.

Agreement

For evaluation of AI/ML based positioning, reuse the common scenario parameters defined in Table 6-1 of TR 38.857.

 

Agreement

For evaluation of InF-DH scenario, the parameters are modified from TR 38.857 Table 6.1-1 as shown in the table below.

·        The parameters in the table are applicable to InF-DH at least. If another InF sub-scenario is prioritized in addition to InF-DH, some parameters in the table below may be updated.

Parameters common to InF scenario (Modified from TR 38.857 Table 6.1-1)

 

FR1 Specific Values

FR2 Specific Values

Channel model

InF-SH, InF-DH

InF-SH, InF-DH

Layout

Hall size

InF-DH:

(baseline) 120x60 m

(optional) 300x150 m

BS locations

18 BSs on a square lattice with spacing D, located D/2 from the walls.

-              for the small hall (L=120m x W=60m): D=20m

-              for the big hall (L=300m x W=150m): D=50m

 

Room height

10m

Total gNB TX power, dBm

24dBm

24dBm

EIRP should not exceed 58 dBm

gNB antenna configuration

(M, N, P, Mg, Ng) = (4, 4, 2, 1, 1), dH=dV=0.5λ – Note 1

Note: Other gNB antenna configurations are not precluded for evaluation

(M, N, P, Mg, Ng) = (4, 8, 2, 1, 1), dH=dV=0.5λ – Note 1

One TXRU per polarization per panel is assumed

gNB antenna radiation pattern

Single sector – Note 1

3-sector antenna configuration – Note 1

Penetration loss

0dB

Number of floors

1

UE horizontal drop procedure

Uniformly distributed over the horizontal evaluation area for obtaining the CDF values for positioning accuracy. The evaluation area should be selected from:

- the convex hull of the horizontal BS deployment.

- the whole hall area, if the CDF values for positioning accuracy are obtained from the whole hall area.

FFS: which of the above should be baseline.

FFS: if an optional evaluation area is needed

UE antenna height

Baseline: 1.5m

(Optional): uniformly distributed within [0.5, X2]m, where X2 = 2m for scenario 1(InF-SH) and X2= for scenario 2 (InF-DH) 

FFS: if the optional UE antenna height is needed

UE mobility

3km/h

Min gNB-UE distance (2D), m

0m

gNB antenna height

Baseline: 8m

(Optional): two fixed heights, either {4, 8} m, or {max(4,), 8}.

FFS: if the optional gNB antenna height is needed

Clutter parameters: {density , height ,size }

High clutter density:

- {40%, 2m, 2m}

- {60%, 6m, 2m}

o   Note: an individual company may treat {40%, 2m, 2m} as optional in their evaluation considering their specific AI/ML design.

Note 1:       According to Table A.2.1-7 in TR 38.802

 

Agreement

For AI/ML-based positioning evaluation, the baseline performance to compare against is that of existing Rel-16/Rel-17 positioning methods.

·        As a starting point, each participating company report the specific existing positioning method (e.g., DL-TDOA, Multi-RTT) used as comparison.

Agreement

For all scenarios and use cases, the main KPI is the CDF percentiles of horizontal accuracy.

·        Companies can optionally report vertical accuracy.

Agreement

The CDF percentiles to analyse are: {50%, 67%, 80%, 90%}.

·        90% is the baseline. {50%, 67%, 80%} are optional.
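
A minimal sketch of computing these CDF percentiles from evaluation output follows; the function and array names are illustrative assumptions:

```python
import numpy as np

def horizontal_error_percentiles(est_xy, true_xy,
                                 percentiles=(50, 67, 80, 90)):
    """CDF percentiles of horizontal positioning error in metres.

    est_xy / true_xy: (n_samples, 2) estimated and ground-truth 2D
    positions; 90% is the baseline percentile per the agreement.
    """
    err = np.hypot(*(est_xy - true_xy).T)  # per-sample horizontal error
    return {p: float(np.percentile(err, p)) for p in percentiles}
```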

Agreement

Target positioning requirements for horizontal accuracy and vertical accuracy are not defined for AI/ML-based positioning evaluation.

 

Agreement

For evaluation of AI/ML based positioning, the KPI include the model complexity and computational complexity.

·        FFS: the details of model complexity and computational complexity

Agreement

Synthetic dataset generated according to the statistical channel models in TR 38.901 is used for model training, validation, and testing.

 

Agreement

The dataset is generated by a system level simulator based on 3GPP simulation methodology.

 

Agreement

As a starting point, the training, validation and testing datasets are from the same large-scale and small-scale propagation parameter settings. Subsequent evaluation can study the performance when the training dataset and testing dataset are from different settings.

 

Agreement

For AI/ML-based positioning evaluation, RAN1 does not attempt to define any common AI/ML model as a baseline.

 

R1-2205480         Summary #4 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement  Moderator (Ericsson)

R1-2205481        Summary #5 of [109-e-R18-AI/ML-07] Email discussion on evaluation of AI/ML for positioning accuracy enhancement       Moderator (Ericsson)

Decision: As per email decision posted on May 20th,

Agreement

The entry “UE horizontal drop procedure” in the simulation parameter table for InF is updated to the following.

UE horizontal drop procedure

Uniformly distributed over the horizontal evaluation area for obtaining the CDF values for positioning accuracy. The evaluation area should be selected from:

- (baseline) the whole hall area, with the CDF values for positioning accuracy obtained from the whole hall area.

- (optional) the convex hull of the horizontal BS deployment, with the CDF values for positioning accuracy obtained from the convex hull.

 

Agreement

The entries “UE antenna height” and “gNB antenna height” in the simulation parameter table for InF are updated to the following.

UE antenna height

Baseline: 1.5m

(Optional): uniformly distributed within [0.5, X2]m, where X2 = 2m for scenario 1(InF-SH) and X2= for scenario 2 (InF-DH) 

gNB antenna height

Baseline: 8m

(Optional): two fixed heights, either {4, 8} m, or {max(4,), 8}.

 

Agreement

If spatial consistency is enabled for the evaluation, companies model at least one of: large scale parameters, small scale parameters and absolute time of arrival, where

·        the large scale parameters are according to Section 7.5 of TR 38.901 and correlation distance =  for InF (Section 7.6.3.1 of TR 38.901)

·        the small scale parameters are according to Section 7.6.3.1 of TR 38.901

·        the absolute time of arrival is according to Section 7.6.9 of TR 38.901

Agreement

If spatial consistency is enabled for the evaluation of AI/ML based positioning, the baseline evaluation does not incorporate spatially consistent UT/BS mobility modelling (Section 7.6.3.2 of TR 38.901).

·        It is optional to implement spatially consistent UT/BS mobility modelling (Section 7.6.3.2 of TR 38.901).

Agreement

For evaluation of AI/ML based positioning, companies are encouraged to evaluate the model generalization.

·        FFS: the metrics for evaluating the model generalization (e.g., model performance based on agreed KPIs under different settings)

 

Decision: As per email decision posted on May 25th,

Agreement

Companies are encouraged to provide evaluation results for:

 

Agreement

When reporting evaluation results with direct AI/ML positioning and/or AI/ML assisted positioning, the proponent company is expected to describe whether a one-sided model or a two-sided model is used.

·        If a one-sided model is used (i.e., UE-side model or network-side model), the proponent company reports which side performs the model inference (e.g., UE, network), and any details specific to the side that performs the AI/ML model inference.

·        If a two-sided model is used, the proponent company reports which side (e.g., UE, network) performs the first part of the inference, and which side (e.g., network, UE) performs the remaining part of the inference.

Agreement

For evaluation of AI/ML based positioning, the computational complexity can be reported via the metric of floating point operations (FLOPs).

·        Note: For AI/ML assisted methods, computational complexity for the AI/ML model is only one component of the overall complexity for estimating the UE’s location.

·        Note: Other metrics to measure the computational complexity are not precluded.
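
As an illustration of the FLOPs metric, a simple counting convention for a fully connected model is sketched below; the 2·in·out convention and the layer sizes are assumptions, since the agreement fixes FLOPs only as a reportable metric, not how to count them:

```python
def mlp_flops(layer_sizes):
    """Rough FLOP count for a fully connected model (illustration only).

    Uses the common 2*in*out convention (one multiply + one add per
    weight), ignoring biases and activations.
    """
    return sum(2 * n_in * n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
```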

Agreement

For evaluation of AI/ML based positioning, details of the training dataset generation are to be reported by the proponent company. The report may include (in addition to other selected settings, if applicable):

·        The size of training dataset, for example, the total number of UEs in the evaluation area for generating training dataset;

·        The distribution of UE location for generating the training dataset may be one of the following:

o   Option 1: grid distribution, i.e., one training data sample is collected at the center of each small square grid cell, where, for example, the width of the square grid can be 0.25/0.5/1.0 m.

o   Option 2: uniform distribution, i.e., the UE location is randomly and uniformly distributed in the evaluation area.
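The two UE-distribution options above can be sketched as follows; the evaluation-area dimensions and the 0.5 m grid width are illustrative assumptions (the agreement only lists 0.25/0.5/1.0 m as example grid widths).

```python
# Sketch of Option 1 (grid) and Option 2 (uniform) UE-location generation
# for training-dataset creation. Area size is an assumed placeholder.
import numpy as np

def grid_locations(width_m: float, height_m: float, grid_m: float) -> np.ndarray:
    """Option 1: one sample at the center of each square grid cell."""
    xs = np.arange(grid_m / 2, width_m, grid_m)
    ys = np.arange(grid_m / 2, height_m, grid_m)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

def uniform_locations(width_m: float, height_m: float, n: int, rng) -> np.ndarray:
    """Option 2: UE locations uniformly distributed over the area."""
    return rng.uniform([0.0, 0.0], [width_m, height_m], size=(n, 2))

rng = np.random.default_rng(0)
grid = grid_locations(120.0, 60.0, 0.5)            # 0.5 m grid width
uni = uniform_locations(120.0, 60.0, len(grid), rng)
print(grid.shape, uni.shape)
```

The grid option yields a deterministic dataset size (area divided by cell area), while the uniform option lets the proponent choose the sample count directly.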

 

Final summary in R1-2205633.

9.2.4.2       Other aspects on AI/ML for positioning accuracy enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2203145         Discussion on AI/ML for positioning accuracy enhancement   Huawei, HiSilicon

R1-2203253         Discussion on potential enhancements for AI/ML based positioning      ZTE

R1-2203286         Discussions on AI-Pos      Ericsson

R1-2203456         Discussion on other aspects on AI/ML for positioning              CATT

R1-2203555         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2203692         Discussion on other aspects on AI/ML for positioning accuracy enhancement               NEC

R1-2203731         Considerations on AI/ML for positioning accuracy enhancement            Sony

R1-2203813         Initial views on the other aspects of AI/ML-based positioning accuracy enhancement               xiaomi

R1-2203902         Representative sub use cases for Positioning Samsung

R1-2204020         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement               OPPO

R1-2204105         Discussion on sub use cases of AI/ML for positioning accuracy enhancements use case       FUTUREWEI

R1-2204154         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2204160         Potential specification impacts for AI/ML based positioning    InterDigital, Inc.

R1-2204185         Discussions on AI-ML for positioning accuracy enhancement CAICT

R1-2204243         Discussion on other aspects on AI/ML for positioning accuracy enhancement               Apple

R1-2204300         Discussion on other aspects on AI/ML for positioning accuracy enhancement               CMCC

R1-2204422         AI/ML Positioning use cases and Associated Impacts Lenovo

R1-2204576         Other aspects on ML for positioning accuracy enhancement     Nokia, Nokia Shanghai Bell

R1-2204798         Use-cases and specification for positioning   Intel Corporation

R1-2204838         On potential specification impact of AI/ML for positioning      Fraunhofer IIS, Fraunhofer HHI

R1-2204845         On other aspects of AI and ML for positioning enhancement   NVIDIA

R1-2205029         Other aspects on AIML for positioning accuracy enhancement Qualcomm Incorporated

R1-2205081         Sub-use cases and spec impacts for AI/ML for positioning accuracy enhancement               Fujitsu Limited

 

[109-e-R18-AI/ML-08] – Huaming (vivo)

Email discussion on other aspects of AI/ML for positioning accuracy enhancement by May 20

-        Check points: May 18

R1-2205229        Discussion summary #1 of [109-e-R18-AI/ML-08]  Moderator (vivo)

From May 18th GTW session

Agreement

Study further on sub use cases and potential specification impact of AI/ML for positioning accuracy enhancement considering various identified collaboration levels.

·        Companies are encouraged to identify positioning specific aspects on collaboration levels if any in agenda 9.2.4.2.

·        Note1: terminology, notation and common framework of Network-UE collaboration levels are to be discussed in agenda 9.2.1 and expected to be applicable to AI/ML for positioning accuracy enhancement.

·        Note2: not every collaboration level may be applicable to an AI/ML approach for a sub use case

Agreement

For further study, at least the following aspects of AI/ML for positioning accuracy enhancement are considered.

·        Direct AI/ML positioning: the output of AI/ML model inference is UE location

o   E.g., fingerprinting based on channel observation as the input of AI/ML model

o   FFS the details of channel observation as the input of AI/ML model, e.g. CIR, RSRP and/or other types of channel observation

o   FFS: applicable scenario(s) and AI/ML model generalization aspect(s)

·        AI/ML assisted positioning: the output of AI/ML model inference is new measurement and/or enhancement of existing measurement

o   E.g., LOS/NLOS identification, timing and/or angle of measurement, likelihood of measurement

o   FFS the details of input and output for corresponding AI/ML model(s)

o   FFS: applicable scenario(s) and AI/ML model generalization aspect(s)

·        Companies are encouraged to clarify all details/aspects of their proposed AI/ML approaches/sub use case(s) of AI/ML for positioning accuracy enhancement

 

Agreement

Companies are encouraged to study and provide inputs on potential specification impact at least for the following aspects of AI/ML approaches for sub use cases of AI/ML for positioning accuracy enhancement.

·        AI/ML model training

o   training data type/size

o   training data source determination (e.g., UE/PRU/TRP)

o   assistance signalling and procedure for training data collection

·        AI/ML model indication/configuration

o   assistance signalling and procedure (e.g., for model configuration, model activation/deactivation, model recovery/termination, model selection)

·        AI/ML model monitoring and update

o   assistance signalling and procedure (e.g., for model performance monitoring, model update/tuning)

·        AI/ML model inference input

o   report/feedback of model input for inference (e.g., UE feedback as input for network side model inference)

o   model input acquisition and pre-processing

o   type/definition of model input

·        AI/ML model inference output

o   report/feedback of model inference output

o   post-processing of model inference output

·        UE capability for AI/ML model(s) (e.g., for model training, model inference and model monitoring)

·        Other aspects are not precluded

·        Note: not all aspects may apply to an AI/ML approach in a sub use case

·        Note2: the definitions of common AI/ML model terminologies are to be discussed in agenda 9.2.1

 

Final summary in R1-2205498.

9.2.5        Other

R1-2203254         Discussion on other use cases for AI/ML      ZTE

R1-2203405         Discussions on AI-ML challenges and limitations      New H3C Technologies Co., Ltd.

R1-2203457         Views on UE capability of AI/ML for air interface     CATT

R1-2203556         Discussions on AI/ML for DMRS   vivo

R1-2203670         Draft skeleton of TR 38.843            Ericsson

R1-2204577         On ML capability exchange, interoperability, and testability aspects      Nokia, Nokia Shanghai Bell

R1-2204846         GPU hosted 5G virtual RAN baseband processing and AI applications  NVIDIA

R1-2204911         Discussion on other potential use cases of AI/ML for NR air interface   Huawei, HiSilicon

R1-2205067         Consideration on UE processing capability for AI/ML utilization           Rakuten Mobile


 RAN1#110

9.2       Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-221348 for detailed scope of the SI.

 

R1-2208145        Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface)            Ad-hoc Chair (CMCC)

 

[110-R18-AI/ML] Email to be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc – Taesang (Qualcomm)

 

R1-2207222         Technical report for Rel-18 SI on AI and ML for NR air interface          Qualcomm Incorporated

TR 38.843

9.2.1        General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2205752         Continued discussion on common AI/ML characteristics and operations               FUTUREWEI

R1-2205830         General aspects of dataset construction         Keysight Technologies UK Ltd

R1-2205889         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2205966         Discussions on Common Aspects of AI/ML Framework           TCL Communication

R1-2206031         Discussions on AI/ML framework  vivo

R1-2206067         Discussion on general aspects of common AI PHY framework ZTE

R1-2206113         Considerations on common AI/ML framework           Sony

R1-2206163         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2206194         On General Aspects of AI/ML Framework   Google

R1-2206314         On general aspects of AI/ML framework      OPPO

R1-2206390         AI/ML framework for air interface CATT

R1-2206466         Discussion on general aspects of AI ML framework   NEC

R1-2206507         General aspects of AI and ML framework for NR air interface NVIDIA

R1-2206509         General aspects of AI/ML framework           Lenovo

R1-2206577         General aspects of AI/ML framework           Intel Corporation

R1-2206603         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2206634         Views on the general aspects of AL/ML framework   Xiaomi

R1-2206674         Considerations on general aspects on AI-ML framework          CAICT

R1-2206686         Discussion on general aspects of AI/ML for NR air interface   China Telecom

R1-2206819         General aspects of AI ML framework and evaluation methodogy           Samsung

R1-2206873         General aspects on AI/ML framework           LG Electronics

R1-2206885         Discussion on general aspects of AI/ML framework   Ericsson

R1-2206901         Discussion on general aspects of AI/ML framework   CMCC

R1-2206952         Discussion on general aspects of AI/ML framework for NR air interface               ETRI

R1-2206967         Further discussion on the general aspects of ML for Air-interface          Nokia, Nokia Shanghai Bell

R1-2206987         General aspects of AI/ML framework           MediaTek Inc.

R1-2207117         Discussion on AI/ML Model Life Cycle Management Rakuten Mobile, Inc

R1-2207223         General aspects of AI/ML framework           Qualcomm Incorporated

R1-2207293         Discussion on general aspects of AI/ML framework   Panasonic

R1-2207327         General aspect of AI/ML framework             Apple

R1-2207400         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2207457         Observation of Channel Matrix       Sharp

R1-2207459         Discussion on general aspects of AI/ML framework   KDDI Corporation

 

R1-2207879        Summary#1 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From Monday session

Agreement

Study the following aspects, including the definition of components (if needed) and necessity, in Life Cycle Management

Note: Some aspects in the list may not have specification impact.

Note: Aspects with square brackets are tentative and pending terminology definition.

Note: More aspects may be added as study progresses.

 

 

R1-2207932        Summary#2 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

 

R1-2208063        Summary#3 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

Agreement

The following is an initial list of common KPIs (if applicable) for evaluating performance benefits of AI/ML

·        Performance

o   Intermediate KPIs

o   Link and system level performance

o   Generalization performance

·        Over-the-air Overhead

o   Overhead of assistance information

o   Overhead of data collection

o   Overhead of model delivery/transfer

o   Overhead of other AI/ML-related signaling

·        Inference complexity

o   Computational complexity of model inference: FLOPs

o   Computational complexity for pre- and post-processing

o   Model complexity: e.g., the number of parameters and/or size (e.g. Mbyte)

·        Training complexity

·        LCM related complexity and storage overhead

o   FFS: specific aspects

·        FFS: Latency, e.g., Inference latency

Note: Other aspects may be added in the future, e.g. training related KPIs

Note: Use-case specific KPIs may be additionally considered for the given use-case.

 

Working Assumption

Terminology

Description

Online training

An AI/ML training process where the model being used for inference is (typically continuously) trained in (near) real-time with the arrival of new training samples.

Note: the notion of (near) real-time vs. non real-time is context-dependent and is relative to the inference time-scale.

Note: This definition only serves as a guidance. There may be cases that may not exactly conform to this definition but could still be categorized as online training by commonly accepted conventions.

Note: Fine-tuning/re-training may be done via online or offline training. (This note could be removed when we define the term fine-tuning.)

Offline training

An AI/ML training process where the model is trained based on collected dataset, and where the trained model is later used or delivered for inference.

Note: This definition only serves as a guidance. There may be cases that may not exactly conform to this definition but could still be categorized as offline training by commonly accepted conventions.

 

Note: It is encouraged for the 3GPP discussion to proceed without waiting for online/offline training terminologies.

 

R1-2208178        Summary#4 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

Working Assumption

Include the following into a working list of terminologies to be used for RAN1 AI/ML air interface SI discussion.

Terminology

Description

AI/ML model delivery

A generic term referring to delivery of an AI/ML model from one entity to another entity in any manner.

Note: An entity could mean a network node/function (e.g., gNB, LMF, etc.), UE, proprietary server, etc.

 

Note: Companies are encouraged to bring discussions on various options and their views on how to define Level y/z boundary in the next RAN1 meeting.

9.2.2        AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2205890         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2206032         Evaluation on AI/ML for CSI feedback enhancement vivo

R1-2206068         Evaluation on AI for CSI feedback enhancement        ZTE

R1-2206164         Evaluation on AI/ML for CSI feedback enhancement Fujitsu

R1-2206195         On Evaluation of AI/ML based CSI Google

R1-2206315         Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement       OPPO

R1-2206334         Evaluation on AI/ML-based CSI feedback enhancement           BJTU

R1-2206336         Continued discussion on evaluation of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2206391         Evaluation on AI/ML for CSI feedback         CATT

R1-2206510         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2206520         Evaluation of AI and ML for CSI feedback enhancement         NVIDIA

R1-2206578         Evaluation for CSI feedback enhancements  Intel Corporation

R1-2206604         Discussion on evaluation on AIML for CSI feedback enhancement        Spreadtrum Communications, BUPT

R1-2206635         Discussion on evaluation on AI/ML for CSI feedback enhancement       Xiaomi

R1-2206675         Some discussions on evaluation on AI-ML for CSI feedback   CAICT

R1-2206820         Evaluation on AI ML for CSI feedback enhancement Samsung

R1-2206874         Evaluation on AI/ML for CSI feedback enhancement LG Electronics

R1-2206902         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2206953         Evaluation on AI/ML for CSI feedback enhancement ETRI

R1-2206968         Evaluation of ML for CSI feedback enhancement       Nokia, Nokia Shanghai Bell

R1-2206988         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

R1-2207063         On evaluation of AI/ML based methods for CSI feedback enhancement Fraunhofer IIS, Fraunhofer HHI           (Late submission)

R1-2207081         Views on Evaluation of AI/ML for CSI Feedback Enhancement             Mavenir

R1-2207152         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2207224         Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated

R1-2207328         Evaluation on AI/ML for CSI feedback         Apple

R1-2207401         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2207475         Evaluation on AI/ML for CSI feedback enhancement in spatial-frequency-time domain  SEU       (rev of R1-2205824)

R1-2207720         Evaluations of AI-CSI       Ericsson (rev of R1-2206883)

 

R1-2207836        Summary#1 for CSI evaluation of [110-R18-AI/ML]            Moderator (Huawei)

From Monday session

Agreement

The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:

 

R1-2207837        Summary#2 for CSI evaluation of [110-R18-AI/ML]            Moderator (Huawei)

From Tuesday session, previous agreement is completed as follows

Agreement

The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:

·        Case 1: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a dataset from the same Scenario#A/Configuration#A

·        Case 2: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B

·        Case 3: The AI/ML model is trained based on training dataset constructed by mixing datasets from multiple scenarios/configurations including Scenario#A/Configuration#A and a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B, and then the AI/ML model performs inference/test on a dataset from a single Scenario/Configuration from the multiple scenarios/configurations, e.g.,  Scenario#A/Configuration#A, Scenario#B/Configuration#B, Scenario#A/Configuration#B.

o   Note: Companies to report the ratio for dataset mixing

o   Note: number of the multiple scenarios/configurations can be larger than two

·        FFS the detailed set of scenarios/configurations

·        FFS other cases for generalization verification, e.g.,

o   Case 2A: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model is updated based on a fine-tuning dataset different than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B. After that, the AI/ML model is tested on a different dataset than Scenario#A/Configuration#A, e.g., subject to Scenario#B/Configuration#B, Scenario#A/Configuration#B.
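The Case 3 construction above (training on a mix of datasets from multiple scenarios/configurations, with the mixing ratio reported) can be sketched as below; the dataset contents, sizes, and the 50:50 ratio are placeholders for illustration.

```python
# Sketch of Case 3 training-dataset mixing across scenarios/configurations.
# Ratios are what companies are asked to report per the agreement note.
import numpy as np

def mix_datasets(datasets, ratios, total, rng):
    """Draw `total` training samples from several per-scenario datasets
    according to the given mixing ratios (which must sum to 1)."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    parts = []
    for data, ratio in zip(datasets, ratios):
        idx = rng.choice(len(data), size=int(total * ratio), replace=False)
        parts.append(data[idx])
    mixed = np.concatenate(parts)
    rng.shuffle(mixed)  # shuffle so batches span all scenarios
    return mixed

rng = np.random.default_rng(0)
ds_a = np.zeros((1000, 4))   # placeholder for Scenario#A/Configuration#A data
ds_b = np.ones((1000, 4))    # placeholder for Scenario#B/Configuration#B data
train = mix_datasets([ds_a, ds_b], [0.5, 0.5], total=1000, rng=rng)
print(train.shape)
```

Per the notes, more than two scenarios/configurations can be mixed, and the inference/test set is still drawn from a single scenario/configuration.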

 

R1-2207838        Summary#3 for CSI evaluation of [110-R18-AI/ML]            Moderator (Huawei)

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, if the GCS/SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’, between GCS and SGCS, SGCS is adopted.
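A minimal sketch of the adopted SGCS intermediate KPI is shown below; the per-subband averaging and the eigenvector dimensions used here are common conventions assumed for illustration, not text from the agreement.

```python
# Illustrative SGCS (squared generalized cosine similarity) between a
# target precoding eigenvector and its reconstruction, averaged over
# subbands. Shapes/averaging are assumptions, not mandated by RAN1.
import numpy as np

def sgcs(v_true: np.ndarray, v_hat: np.ndarray) -> float:
    """v_true, v_hat: (num_subbands, num_ports) complex eigenvectors."""
    num = np.abs(np.sum(np.conj(v_true) * v_hat, axis=1)) ** 2
    den = (np.linalg.norm(v_true, axis=1) * np.linalg.norm(v_hat, axis=1)) ** 2
    return float(np.mean(num / den))

rng = np.random.default_rng(1)
v = rng.standard_normal((13, 32)) + 1j * rng.standard_normal((13, 32))
print(sgcs(v, v))  # perfect reconstruction -> 1.0
```

Because the numerator takes the magnitude of the inner product, SGCS is insensitive to a common phase rotation of the reconstructed eigenvector, which is one reason it is preferred over raw correlation for precoder comparison.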

 

Agreement

For CSI enhancement evaluations, to verify the generalization performance of an AI/ML model over various scenarios, the set of scenarios are considered focusing on one or more of the following aspects as a starting point:

·        Various deployment scenarios (e.g., UMa, UMi, InH)

·        Various outdoor/indoor UE distributions for UMa/UMi (e.g., 10:0, 8:2, 5:5, 2:8, 0:10)

·        Various carrier frequencies (e.g., 2GHz, 3.5GHz)

·        Other aspects of scenarios are not precluded, e.g., various antenna spacing, various antenna virtualization (TxRU mapping), various ISDs, various UE speeds, etc.

·        Companies to report the selected scenarios for generalization verification

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, consider CSI prediction involving temporal domain as a starting point.

 

 

R1-2207839        Summary#4 for CSI evaluation of [110-R18-AI/ML]            Moderator (Huawei)

Agreement

For CSI enhancement evaluations, to verify the generalization/scalability performance of an AI/ML model over various configurations (e.g., which may potentially lead to different dimensions of model input/output), the set of configurations are considered focusing on one or more of the following aspects as a starting point:

·        Various bandwidths (e.g., 10MHz, 20MHz) and/or frequency granularities, (e.g., size of subband)

·        Various sizes of CSI feedback payloads, FFS candidate payload number

·        Various antenna port layouts, e.g., (N1/N2/P) and/or antenna port numbers (e.g., 32 ports, 16 ports)

·        Other aspects of configurations are not precluded, e.g., various numerologies, various rank numbers/layers, etc.

·        Companies to report the selected configurations for generalization verification

·        Companies are encouraged to report the method to achieve generalization over various configurations to achieve scalability of the AI/ML input/output, including pre-processing, post-processing, etc.

Conclusion

For the evaluation of the AI/ML based CSI feedback enhancement, for ‘Channel estimation’, it is up to companies to choose the error modeling method for realistic channel estimation and report by willingness.

·        Note: It is not precluded that companies use ideal channel to calibrate

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, the throughput in the ‘Evaluation Metric’ includes average UPT, 5%ile UE throughput, and CDF of UPT.

 

Agreement

For the evaluation of the AI/ML based CSI compression sub use cases, companies are encouraged to report the specific quantization/dequantization method, e.g., vector quantization, scalar quantization, etc.

 

Agreement

For the evaluation of the AI/ML based CSI compression sub use cases, the capability/complexity related KPIs, including FLOPs as well as AI/ML model size and/or number of AI/ML parameters, are to be reported separately for the CSI generation part and the CSI reconstruction part.

 

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, a one-sided structure is considered as a starting point, where the AI/ML inference is performed at either gNB or UE.

 

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for evaluation,

·        100% outdoor UE is assumed for UE distribution.

o   FFS: whether to add O2I car penetration loss per TS 38.901 if the simulation assumes UEs inside vehicles

·        UE speed is assumed for evaluation with 10, 20, 30, 60, 120km/h

o   Note: Companies to report the set/subset of speeds

·        5ms CSI feedback periodicity is taken as baseline, while other CSI feedback periodicity values can be reported for the EVM

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, companies are encouraged to report the details of their models for evaluation, including:

·        The structure of the AI/ML model, e.g., type (FCN, RNN, CNN,…), the number of layers, branches, format of parameters, etc.

·        The input CSI type, e.g., raw channel matrix, eigenvector(s) of the raw channel matrix, feedback CSI information, etc.

·        The output CSI type, e.g., channel matrix, eigenvector(s), feedback CSI information, etc.

·        Data pre-processing/post-processing

·        Loss function

·        Others are not precluded
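As a non-normative toy example of the temporal-domain prediction task described above, the sketch below fits a linear autoregressive predictor to past complex channel samples by least squares; this is only a stand-in for the FCN/RNN/CNN model structures companies are asked to report, and the Doppler value and model order are arbitrary.

```python
# Toy temporal-domain CSI predictor: linear AR model fitted by least
# squares on past channel samples (a placeholder for reported AI/ML models).
import numpy as np

def fit_ar_predictor(h_hist: np.ndarray, order: int) -> np.ndarray:
    """h_hist: (T,) complex channel samples; returns AR coefficients
    w such that h[t] ~= sum_i w[i] * h[t - order + i]."""
    X = np.stack(
        [h_hist[i : len(h_hist) - order + i] for i in range(order)], axis=1
    )
    y = h_hist[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic single-tap channel: complex exponential (Doppler-like rotation)
t = np.arange(64)
h = np.exp(1j * 2 * np.pi * 0.05 * t)
w = fit_ar_predictor(h, order=2)
h_pred = h[-2:] @ w  # one-step-ahead prediction of the next CSI sample
```

For this noiseless synthetic channel the AR fit is exact; with realistic channel estimation error and higher UE speeds, the prediction horizon versus the 5 ms feedback periodicity becomes the interesting evaluation point.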

 

Final summary in R1-2207840.

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2205891         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2205967         Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement            TCL Communication

R1-2206033         Other aspects on AI/ML for CSI feedback enhancement           vivo

R1-2206069         Discussion on other aspects for AI CSI feedback enhancement ZTE

R1-2206114         Considerations on CSI measurement enhancements via AI/ML Sony

R1-2206165         Discussion on other aspects of AI/ML for CSI feedback enhancement   Fujitsu

R1-2206185         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2206196         On Enhancement of AI/ML based CSI           Google

R1-2206241         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2206316         On sub use cases and other aspects of AI/ML for CSI feedback enhancement               OPPO

R1-2206337         Continued discussion on other aspects of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2206392         Other aspects on AI/ML for CSI feedback    CATT

R1-2206511         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2206521         AI and ML for CSI feedback enhancement   NVIDIA

R1-2206579         Use-cases and specification for CSI feedback              Intel Corporation

R1-2206605         Discussion on other aspects on AIML for CSI feedback            Spreadtrum Communications

R1-2206636         Discussion on potential specification impact for CSI feedback based on AI/ML               Xiaomi

R1-2206676         Discussions on AI-ML for CSI feedback       CAICT

R1-2206687         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2206821         Representative sub use cases for CSI feedback enhancement    Samsung

R1-2206875         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2206884         Discussion on AI-CSI        Ericsson

R1-2206903         Discussion on other aspects on AI/ML for CSI feedback enhancement  CMCC

R1-2206954         Discussion on other aspects on AI/ML for CSI feedback enhancement  ETRI

R1-2206969         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2206989         Other aspects on AI/ML for CSI feedback enhancement           MediaTek Inc.

R1-2207153         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2207225         Other aspects on AI/ML for CSI feedback enhancement           Qualcomm Incorporated

R1-2207329         Other aspects on AI/ML for CSI      Apple

R1-2207370         Sub-use cases for AI/ML feedback enhancements       AT&T

R1-2207402         Discussion on other aspects on AI/ML for CSI feedback enhancement  NTT DOCOMO, INC.

 

R1-2207780         Summary #1 on other aspects of AI/ML for CSI enhancement Moderator (Apple)

R1-2207853        Summary #2 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Tuesday session

Agreement

In CSI compression using two-sided model use case, the following AI/ML model training collaborations will be further studied:

·        Type 1: Joint training of the two-sided model at a single side/entity, e.g., UE-sided or Network-sided.

·        Type 2: Joint training of the two-sided model at network side and UE side, respectively.

·        Type 3: Separate training at network side and UE side, where the UE-side CSI generation part and the network-side CSI reconstruction part are trained by UE side and network side, respectively.

·        Note: Joint training means the generation model and reconstruction model should be trained in the same loop for forward propagation and backward propagation. Joint training could be done both at single node or across multiple nodes (e.g., through gradient exchange between nodes).

·        Note: Separate training includes sequential training starting with UE side training, or sequential training starting with NW side training [, or parallel training] at UE and NW

·        Other collaboration types are not excluded.

 

R1-2207854        Summary #3 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

Conclusion

CSI-RS configuration and overhead reduction is NOT selected as one representative sub-use case for CSI feedback enhancement use case.

 

Conclusion

Resource allocation and scheduling is NOT selected as one representative sub-use case for CSI feedback enhancement use case.

 

Agreement

In CSI compression using two-sided model use case, further study potential specification impact on CSI report, including at least

·        CSI generation model output and/or CSI reconstruction model input, including configuration(size/format) and/or potential post/pre-processing of CSI generation model output/CSI reconstruction model input.

·        CQI determination

·        RI determination

 

R1-2208077        Summary #4 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

Agreement

In CSI compression using two-sided model use case, further study potential specification impact on output CSI, including at least

·        Model output type/dimension/configuration and potential post processing

Agreement

In CSI compression using two-sided model use case, further discuss at least the following aspects, including their necessity/feasibility/potential specification impact,  for data collection for AI/ML model training/inference/update/monitoring:

·        Assistance signaling for UE’s data collection

·        Assistance signaling for gNB’s data collection

·        Delivery of the datasets

9.2.3        AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2205753         Continued discussion on evaluation of AI/ML for beam management               FUTUREWEI

R1-2205892         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2206034         Evaluation on AI/ML for beam management vivo

R1-2206070         Evaluation on AI for beam management       ZTE

R1-2206166         Evaluation on AI/ML for beam management Fujitsu

R1-2206181         Discussion for evaluation on AI/ML for beam management     InterDigital, Inc.

R1-2206197         On Evaluation of AI/ML based Beam Management    Google

R1-2206250         Evaluation of AI/ML based beam management           Rakuten Mobile, Inc

R1-2206317         Evaluation methodology and preliminary results on AI/ML for beam management               OPPO

R1-2206393         Evaluation on AI/ML for beam management CATT

R1-2206512         Evaluation on AI/ML for beam management Lenovo

R1-2206522         Evaluation of AI and ML for beam management         NVIDIA

R1-2206580         Evaluation for beam management   Intel Corporation

R1-2206637         Evaluation on AI/ML for beam management Xiaomi

R1-2206677         Some discussions on evaluation on AI-ML for Beam management         CAICT

R1-2206688         Evaluation on AI/ML for beam management China Telecom

R1-2206822         Evaluation on AI ML for Beam management              Samsung

R1-2206876         Evaluation on AI/ML for beam management LG Electronics

R1-2206904         Discussion on evaluation on AI/ML for beam management      CMCC

R1-2206938         Evaluation on AI/ML for beam management Ericsson

R1-2206970         Evaluation of ML for beam management      Nokia, Nokia Shanghai Bell

R1-2206990         Evaluation on AI/ML for beam management MediaTek Inc.

R1-2207068         Evaluation on AI/ML for beam management CEWiT

R1-2207226         Evaluation on AI/ML for beam management Qualcomm Incorporated

R1-2207330         Evaluation on AI/ML for beam management Apple

R1-2207403         Discussion on evaluation on AI/ML for beam management      NTT DOCOMO, INC.

 

R1-2207774        Feature lead summary #1 evaluation of AI/ML for beam management               Moderator (Samsung)

From Monday session

Agreement

·        The following update, based on the agreements in RAN1 #109-e, is adopted:

Parameter: UE distribution

Values:

·        FFS: 10 UEs per sector/cell for system performance related KPI (if supported) [e.g., throughput] for full buffer traffic (if supported) evaluation (model inference).

·        X UEs per sector/cell for system performance related KPI for FTP traffic (if supported) evaluation (model inference).

o   Other values are not precluded

·        Number of UEs per sector/cell during data collection (training/testing) is reported by companies if relevant

·        More UEs per sector/cell for data generation is not precluded.

 

Parameter: UE Antenna Configuration

Values:

·        Antenna setup and port layouts at UE: [1,2,1,4,2,1,1], 2 panels (left, right)

·        [Panel structure: (M,N,P) = (1,4,2)]

o   panels (left, right) with (Mg, Ng) = (1, 2) as baseline

·        Other assumptions are not precluded

Companies to explain TXRU weights mapping.

Companies to explain beam and panel selection.

Companies to explain the number of UE beams.

 

R1-2207775        Feature lead summary #2 evaluation of AI/ML for beam management               Moderator (Samsung)

From Wed session

Agreement

The following update, based on the agreements in RAN1 #109-e, is adopted:

 

Parameter: UE Speed

Values:

·         For spatial domain beam prediction: 3 km/h

·         For time domain beam prediction: 3 km/h (optional), 30 km/h (baseline), 60 km/h (optional), 90 km/h (optional), 120 km/h (optional)

·         Other values are not precluded

Parameter: UE distribution

Values:

·        For spatial domain beam prediction:

o   Option 1: 80% indoor, 20% outdoor as in TR 38.901

o   Option 2: 100% outdoor

·        For time domain prediction: 100% outdoor

 

 

R1-2207776        Feature lead summary #3 evaluation of AI/ML for beam management               Moderator (Samsung)

 

R1-2208104         Feature lead summary #3 evaluation of AI/ML for beam management   Moderator (Samsung)

R1-2208105        Feature lead summary #4 evaluation of AI/ML for beam management               Moderator (Samsung)

Agreement

 

Agreement

 

Agreement

 

Final summary in R1-2208106.

9.2.3.2       Other aspects on AI/ML for beam management

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2205754         Continued discussion on other aspects of AI/ML for beam management               FUTUREWEI

R1-2205893         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2205968         Discussions on Sub-Use Cases in AI/ML for Beam Management           TCL Communication

R1-2206035         Other aspects on AI/ML for beam management          vivo

R1-2206071         Discussion on other aspects for AI beam management              ZTE

R1-2206115         Considerations on AI/ML for beam management        Sony

R1-2206167         Sub use cases and specification impact on AI/ML for beam management               Fujitsu

R1-2206182         Discussion for other aspects on AI/ML for beam management InterDigital, Inc.

R1-2206198         On Enhancement of AI/ML based Beam Management              Google

R1-2206251         Other aspects on AI/ML for beam management          Rakuten Mobile, Inc

R1-2206318         Other aspects of AI/ML for beam management           OPPO

R1-2206332         Beam management with AI/ML in high-speed railway scenarios            BJTU

R1-2206394         Other aspects on AI/ML for beam management          CATT

R1-2206472         Discussion on AI/ML for beam management NEC

R1-2206513         Further aspects of AI/ML for beam management        Lenovo

R1-2206523         AI and ML for beam management  NVIDIA

R1-2206581         Use-cases and specification for beam management     Intel Corporation

R1-2206606         Discussion on other aspects on AIML for beam management   Spreadtrum Communications

R1-2206638         Discussion on other aspects on AI/ML for beam management  Xiaomi

R1-2206678         Discussions on AI-ML for Beam management            CAICT

R1-2206823         Representative sub use cases for beam management   Samsung

R1-2206877         Other aspects on AI/ML for beam management          LG Electronics

R1-2206905         Discussion on other aspects on AI/ML for beam management  CMCC

R1-2206940         Discussion on AI/ML for beam management Ericsson

R1-2206971         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2206991         Other aspects on AI/ML for beam management          MediaTek Inc.

R1-2207227         Other aspects on AI/ML for beam management          Qualcomm Incorporated

R1-2207331         Other aspects on AI/ML for beam management          Apple

R1-2207404         Discussion on other aspects on AI/ML for beam management  NTT DOCOMO, INC.

R1-2207506         Discussion on sub use cases of AI/ML beam management        Panasonic

R1-2207551         Discussion on Performance Related Aspects of Codebook Enhancement with AI/ML               Charter Communications, Inc

R1-2207590         Discussion on other aspects on AI/ML for beam management  KT Corp.

 

R1-2207871         Summary#1 for other aspects on AI/ML for beam management              Moderator (OPPO)

R1-2207872        Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

Agreement

For the sub use case BM-Case1, support the following alternatives for further study:

·        Alt.1: Set A and Set B are different (Set B is NOT a subset of Set A)

·        Alt.2: Set B is a subset of Set A

·        Note1: Set A is for DL beam prediction and Set B is for DL beam measurement.

·        Note2: The beam patterns of Set A and Set B can be clarified by the companies.

Agreement

For the data collection for AI/ML model training (if supported), study the following aspects as a starting point for potential necessary specification impact:

·        Signaling/configuration/measurement/report for data collection, e.g., signaling aspects related to assistance information (if supported), Reference signals

·        Content/type of the collected data

·        Other aspect(s) is not precluded

Agreement

At least for the sub use case BM-Case1 and BM-Case2, support both Alt.1 and Alt.2 for the study of AI/ML model training:

·        Alt.1: AI/ML model training at NW side;

·        Alt.2: AI/ML model training at UE side.

Note: Whether it is online or offline training is a separate discussion.

 

Agreement

For the sub use case BM-Case1 and BM-Case2, further study the following alternatives for the predicted beams:

·        Alt.1: DL Tx beam prediction

·        Alt.2: DL Rx beam prediction

·        Alt.3: Beam pair prediction (a beam pair consists of a DL Tx beam and a corresponding DL Rx beam)

·        Note1: DL Rx beam prediction may or may not have spec impact

 

R1-2207873        Summary#3 for other aspects on AI/ML for beam management       Moderator (OPPO)

Agreement

For the sub use case BM-Case2, further study the following alternatives:

·        Alt.1: Set A and Set B are different (Set B is NOT a subset of Set A)

·        Alt.2: Set B is a subset of Set A (Set A and Set B are not the same)

·        Alt.3: Set A and Set B are the same

·        Note1: The beam pattern of Set A and Set B can be clarified by the companies.

Agreement

Regarding the model monitoring for BM-Case1 and BM-Case2, to investigate specification impacts from the following aspects

·        Performance metric(s)

·        Benchmark/reference for the performance comparison

·        Signaling/configuration/measurement/report for model monitoring, e.g., signaling aspects related to assistance information (if supported), Reference signals

·        Other aspect(s) is not precluded

 

R1-2207874        Summary#4 for other aspects on AI/ML for beam management       Moderator (OPPO)

Agreement

In order to facilitate the AI/ML model inference, study the following aspects as a starting point:

·        Enhanced or new configurations/UE reporting/UE measurement, e.g., Enhanced or new beam measurement and/or beam reporting

·        Enhanced or new signaling for measurement configuration/triggering

·        Signaling of assistance information (if applicable)

·        Other aspect(s) is not precluded

Agreement

Regarding the sub use case BM-Case1 and BM-Case2, study the following alternatives for AI/ML output:

·        Alt.1: Tx and/or Rx Beam ID(s) and/or the predicted L1-RSRP of the N predicted DL Tx and/or Rx beams

o   E.g., N predicted beams can be the top-N predicted beams

·        Alt.2: Tx and/or Rx Beam ID(s) of the N predicted DL Tx and/or Rx beams and other information

o   FFS: other information (e.g., probability for the beam to be the best beam, the associated confidence, beam application time/dwelling time, Predicted Beam failure)

o   E.g., N predicted beams can be the top-N predicted beams

·        Alt.3: Tx and/or Rx Beam angle(s) and/or the predicted L1-RSRP of the N predicted DL Tx and/or Rx beams

o   E.g., N predicted beams can be the top-N predicted beams

o   FFS: details of Beam angle(s)

·        FFS: how to select the N DL Tx and/or Rx beams (e.g., L1-RSRP higher than a threshold, a sum probability of being the best beams higher than a threshold, RSRP corresponding to the expected Tx and/or Rx beam direction(s))

·        Note1: It is up to companies to provide other alternative(s)

·        Note2: Beam ID is only used for discussion purpose

·        Note3: All the outputs are “nominal” and only for discussion purpose

·        Note4: The value of N is up to each company.

·        Note5: All of the outputs in the above alternatives may vary based on whether the AI/ML model inference is at UE side or gNB side.

·        Note 6: The Top-N beam IDs might have been derived via post-processing of the ML-model output
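As a purely illustrative aid to the output alternatives above (not part of any agreement), a minimal Python sketch of deriving Top-N beam IDs via post-processing of nominal per-beam predictions, as in Note 6. The beam IDs, RSRP values, and the softmax-style weighting used for the sum-probability selection are all hypothetical:

```python
import math

def top_n_beams(predicted_rsrp, n):
    """Alt.1-style output: Top-N (beam ID, predicted L1-RSRP) pairs,
    ranked by predicted L1-RSRP in dBm (highest first)."""
    ranked = sorted(predicted_rsrp.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]

def beams_by_sum_probability(predicted_rsrp, threshold):
    """FFS-style selection: smallest beam set whose summed probability of
    being the best beam exceeds `threshold`. The probability model here
    (softmax over RSRP/10) is a placeholder, not an agreed definition."""
    exps = {b: math.exp(r / 10.0) for b, r in predicted_rsrp.items()}
    total = sum(exps.values())
    probs = sorted(((b, e / total) for b, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)
    selected, acc = [], 0.0
    for beam, p in probs:
        selected.append(beam)
        acc += p
        if acc >= threshold:
            break
    return selected
```

For example, with nominal predictions `{0: -80, 1: -75, 2: -90}` (dBm), `top_n_beams(..., 2)` returns beams 1 and 0 in that order.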

9.2.4        AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2205894         Evaluation on AI/ML for positioning accuracy enhancement    Huawei, HiSilicon

R1-2205915         Evaluation on AI/ML for positioning accuracy enhancement    PML

R1-2206036         Evaluation on AI/ML for positioning accuracy enhancement    vivo

R1-2206072         Evaluation on AI for positioning enhancement            ZTE

R1-2206168         Preliminary evaluation results and discussions of AI positioning accuracy enhancement       Fujitsu

R1-2206199         On Evaluation of AI/ML based Positioning  Google

R1-2206224         Evaluation method on AI/ML for positioning accuracy enhancement     PML

R1-2206248         Evaluation of AI/ML for Positioning Accuracy Enhancement  Ericsson

R1-2206252         Evaluation on AI/ML for positioning accuracy enhancement    Rakuten Mobile, Inc

R1-2206319         Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement       OPPO

R1-2206395         Evaluation on AI/ML for positioning             CATT

R1-2206514         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2206524         Evaluation of AI and ML for positioning enhancement             NVIDIA

R1-2206639         Evaluation on AI/ML for positioning accuracy enhancement    Xiaomi

R1-2206679         Some discussions on evaluation on AI-ML for positioning accuracy enhancement               CAICT

R1-2206689         Evaluation on AI/ML for positioning accuracy enhancement    China Telecom

R1-2206824         Evaluation on AI ML for Positioning            Samsung

R1-2206878         Evaluation on AI/ML for positioning accuracy enhancement    LG Electronics

R1-2206906         Discussion on evaluation on AI/ML for positioning accuracy enhancement               CMCC

R1-2206972         Evaluation of ML for positioning accuracy enhancement          Nokia, Nokia Shanghai Bell

R1-2207094         Evaluation on AI/ML for positioning accuracy enhancement    InterDigital, Inc.

R1-2207123         Evaluation on AI/ML for positioning accuracy enhancement    Fraunhofer IIS, Fraunhofer HHI

R1-2207228         Evaluation on AI/ML for positioning accuracy enhancement    Qualcomm Incorporated

 

R1-2207862        Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Monday session

Agreement

For AI/ML-based positioning, both approaches below are studied and evaluated by RAN1:

·        Direct AI/ML positioning

·        AI/ML assisted positioning

Agreement

For AI/ML-based positioning, study impact from implementation imperfections.

 

Agreement

For evaluation of AI/ML based positioning, the model complexity is reported via the metric of “number of model parameters”.

 

 

R1-2207863        Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Wed session

Agreement

To investigate the model generalization capability, at least the following aspect(s) are considered for the evaluation for AI/ML based positioning:

Note: It’s up to participating companies to decide whether to evaluate one aspect at a time, or evaluate multiple aspects at the same time.

 

Agreement

When providing evaluation results for AI/ML based positioning, participating companies are expected to describe data labelling details, including:

·        Meaning of the label (e.g., UE coordinates; binary identifier of LOS/NLOS; ToA)

·        Percentage of training data without label, if incomplete labeling is considered in the evaluation

·        Imperfection of the ground truth labels, if any

Agreement

For evaluation of AI/ML based positioning, study the performance impact from availability of the ground truth labels (i.e., some training data may not have ground truth labels). The learning algorithm (e.g., supervised learning, semi-supervised learning, unsupervised learning) is reported by participating companies.

 

 

R1-2207864        Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

Agreement

For AI/ML-based positioning, for evaluation of the potential performance benefits of model finetuning, report at least the following:

·        training dataset setting (e.g., training dataset size necessary for performing model finetuning)

·        horizontal positioning accuracy (in meters) before and after model finetuning.

Agreement

For both direct AI/ML positioning and AI/ML assisted positioning, the following table is adopted for reporting the evaluation results.

Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [with or without] model generalization, [short model description]

Columns (one row per evaluated model; cells are left blank in this template):

·        Model input

·        Model output

·        Label

·        Clutter param

·        Dataset size: training / test

·        AI/ML complexity: model complexity / computational complexity

·        Horizontal positioning accuracy at CDF=90% (meters): AI/ML

To report the following in table caption:

·        Which side the model is deployed

·        Model generalization investigation, if applied

·        Short model description: e.g., CNN

Further info for the columns:

·        Model input: input type and size

·        Model output: output type and size

·        Label: meaning of ground truth label; percentage of training data set without label if data labeling issue is investigated (default = 0%)

·        Clutter parameter: e.g., {60%, 6m, 2m}

·        Dataset size, both the size of training/validation dataset and the size of test dataset

·        AI/ML complexity: both model complexity in terms of “number of model parameters”, and computational complexity in terms of FLOPs

·        Horizontal positioning accuracy: the accuracy (in meters) of the AI/ML based method

Note: To report other simulation assumptions, if any.
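For illustration only (not an agreed definition), a small sketch of how the "Horizontal positioning accuracy at CDF=90%" cell could be computed from per-UE horizontal errors; the error values and function names are hypothetical:

```python
import math

def horizontal_error(est, truth):
    """2D (horizontal) positioning error in meters between an estimated
    position and the ground-truth position, each given as (x, y)."""
    return math.hypot(est[0] - truth[0], est[1] - truth[1])

def accuracy_at_cdf(errors_m, fraction=0.90):
    """Smallest error e such that at least `fraction` of the per-UE
    errors are <= e, i.e., the CDF = 90% point of the error distribution
    reported in the table above."""
    s = sorted(errors_m)
    idx = max(0, math.ceil(fraction * len(s)) - 1)
    return s[idx]
```

Usage: collect `horizontal_error(est, truth)` over the test dataset, then report `accuracy_at_cdf(errors)` as the meters figure in the accuracy column.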

 

 

R1-2208160        Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

Agreement

For evaluation of AI/ML assisted positioning, an intermediate performance metric of model output is reported.

·        FFS: Detailed definition of the intermediate performance metric of the model output

Agreement

To investigate the model generalization capability, the following aspect is also considered for the evaluation of AI/ML based positioning:

·        UE/gNB RX and TX timing error.

o   The baseline non-AI/ML method may enable the Rel-17 enhancement features (e.g., UE Rx TEG, UE RxTx TEG).

 

Final summary in R1-2208161.

9.2.4.2       Other aspects on AI/ML for positioning accuracy enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2205895         Discussion on AI/ML for positioning accuracy enhancement   Huawei, HiSilicon

R1-2206037         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2206073         Discussion on other aspects for AI positioning enhancement    ZTE

R1-2206116         Considerations on AI/ML for positioning accuracy enhancement            Sony

R1-2206169         Discussions on sub use cases and spec impacts for AIML for positioning accuracy enhancement       Fujitsu

R1-2206200         On Enhancement of AI/ML based Positioning             Google

R1-2206249         Other Aspects of AI/ML Based Positioning Enhancement        Ericsson

R1-2206253         Other aspects on AI/ML based positioning   Rakuten Mobile, Inc

R1-2206320         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement               OPPO

R1-2206396         Other aspects on AI/ML for positioning        CATT

R1-2206477         Discussion on AI/ML for positioning accuracy enhancement   NEC

R1-2206515         AI/ML Positioning use cases and Associated Impacts Lenovo

R1-2206525         AI and ML for positioning enhancement       NVIDIA

R1-2206607         Discussion on other aspects on AIML for positioning accuracy enhancement               Spreadtrum Communications

R1-2206640         Views on the other aspects of AI/ML-based positioning accuracy enhancement               Xiaomi

R1-2206680         Discussions on AI-ML for positioning accuracy enhancement CAICT

R1-2206825         Representative sub use cases for Positioning Samsung

R1-2206879         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2206907         Discussion on other aspects on AI/ML for positioning accuracy enhancement               CMCC

R1-2206973         Other aspects on ML for positioning accuracy enhancement     Nokia, Nokia Shanghai Bell

R1-2207093         Designs and potential specification impacts of AIML for positioning     InterDigital, Inc.

R1-2207122         On potential specification impact of AI/ML for positioning      Fraunhofer IIS, Fraunhofer HHI

R1-2207229         Other aspects on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2207333         Other aspects on AI/ML for positioning accuracy enhancement              Apple

 

R1-2207754         FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement               Moderator (vivo)

R1-2207880        FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Wed session

Agreement

For characterization and performance evaluations of AI/ML based positioning accuracy enhancement, the following two AI/ML based positioning methods are selected.

 

Conclusion

Defer the discussion on prioritization of AI/ML positioning based on collaboration level until there is more progress on the collaboration level discussion under agenda item 9.2.1.

 

Agreement

Regarding data collection for AI/ML model training, to study and provide inputs on potential specification impact at least for the following aspects of AI/ML based positioning accuracy enhancement

 

Agreement

Regarding AI/ML model monitoring and update, to study and provide inputs on potential specification impact at least for the following aspects of AI/ML based positioning accuracy enhancement

 

 

R1-2208049        FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

Agreement

Study aspects in terms of potential benefit(s) and requirement(s)/specification impact(s) of AI/ML model training and inference in AI/ML for positioning accuracy enhancement considering at least

·        UE-side or Network-side training

·        UE-side or Network-side inference

o   Note: model inference at both UE and network side is not precluded where proponent(s) are encouraged to clarify their AI/ML approaches

Note: companies are encouraged to clarify aspects of their proposed AI/ML approaches for positioning when AI/ML model training and inference are not performed at the same entity

 

Conclusion

To use the following terminology defined in TS 38.305 when describing their proposed positioning methods:

·        UE-based

·        UE-assisted/LMF-based

·        NG-RAN node assisted

Note: companies are required to clarify their positioning method(s) when their approaches do not fall in one of the above.


 RAN1#110-bis-e

9.2       Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-221348 for detailed scope of the SI.

 

R1-2210690        Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface)            Ad-hoc Chair (CMCC)

 

R1-2209974         Technical report for Rel-18 SI on AI and ML for NR air interface          Qualcomm Incorporated

9.2.1        General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2208365         Continued discussion on common AI/ML characteristics and operations               FUTUREWEI

R1-2208428         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2208520         Discussion on general aspects of common AI PHY framework ZTE

R1-2208546         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2208633         Discussions on AI/ML framework  vivo

R1-2208739         Discussion on general aspects of AI/ML framework   SEU

R1-2208768         Discussion on general aspects of AI/ML for NR air interface   China Telecom

R1-2208849         On general aspects of AI/ML framework      OPPO

R1-2208877         On General Aspects of AI/ML Framework   Google

R1-2208898         General aspects on AI/ML framework           LG Electronics

R1-2208908         Discussion on general aspects of AI/ML framework   Ericsson

R1-2208966         General aspects of AI/ML framework for NR air interface       CATT

R1-2209010         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2209046         Discussion on general aspects of AI/ML framework   Intel Corporation

R1-2209088         General aspects of AI/ML framework           AT&T

R1-2209094         Considerations on common AI/ML framework           Sony

R1-2209119         General aspects of AI/ML framework           Lenovo

R1-2209145         Discussion on general aspects of AI ML framework   NEC

R1-2209229         Considerations on general aspects on AI-ML framework          CAICT

R1-2209276         Views on the general aspects of AL/ML framework   xiaomi

R1-2209327         Discussion on general aspects of AI/ML framework   CMCC

R1-2209366         Further discussion on the general aspects of ML for Air-interface          Nokia, Nokia Shanghai Bell

R1-2209389         Discussions on Common Aspects of AI/ML Framework           TCL Communication

R1-2209399         Discussion on general aspects of AI/ML framework for NR air interface               ETRI

R1-2209505         General aspects of AI/ML framework           MediaTek Inc.

R1-2209575         General aspect of AI/ML framework             Apple

R1-2209624         General aspects of AI and ML framework for NR air interface NVIDIA

R1-2209639         Discussion on general aspects of AI ML framework   InterDigital, Inc.

R1-2209721         General aspects of AI ML framework and evaluation methodology        Samsung

R1-2209764         Discussion on AI/ML framework    Rakuten Mobile, Inc

R1-2209813         Discussion on general aspects of AI/ML framework   Panasonic

R1-2209865         Discussion on general aspects of AI/ML framework   KDDI Corporation

R1-2209895         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2209975         General aspects of AI/ML framework           Qualcomm Incorporated

 

[110bis-e-R18-AI/ML-01] – Taesang (Qualcomm)

Email discussion on general aspects of AI/ML by October 19

-        Check points: October 14, October 19

R1-2210396        Summary#1 of General Aspects of AI/ML Framework        Moderator (Qualcomm Incorporated)            (rev of R1-2210375)

From Oct 11th GTW session

Working Assumption

·        Define Level y-z boundary based on whether model delivery is transparent to 3GPP signalling over the air interface or not.

·        Note: Procedures other than model transfer/delivery are decoupled from collaboration level y-z.

·        Clarifying note: Level y includes cases without model delivery.

 

R1-2210472        Summary#2 of General Aspects of AI/ML Framework        Moderator (Qualcomm Incorporated)

From Oct 13th GTW session

Agreement

Clarify Level x/y boundary as:

·        Level x is implementation-based AI/ML operation without any dedicated AI/ML-specific enhancement (e.g., LCM related signalling, RS) collaboration between network and UE.
(Note: The AI/ML operation may rely on future specification not related to AI/ML collaboration. The AI/ML approaches can be used as baseline for performance evaluation for future releases.)

Agreement

Study LCM procedure on the basis that an AI/ML model has a model ID with associated information and/or model functionality at least for some AI/ML operations when network needs to be aware of UE AI/ML models

·        FFS: Detailed discussion of model ID with associated information and/or model functionality.

·        FFS: usage of model ID with associated information and/or model functionality based LCM procedure

·        FFS: whether support of model ID

·        FFS: the detailed applicable AI/ML operations

Agreement

For model selection, activation, deactivation, switching, and fallback at least for UE sided models and two-sided models, study the following mechanisms:

·        Decision by the network

o   Network-initiated

o   UE-initiated, requested to the network

·        Decision by the UE

o   Event-triggered as configured by the network, UE’s decision is reported to network

o   UE-autonomous, UE’s decision is reported to the network

o   UE-autonomous, UE’s decision is not reported to the network

FFS: for network sided models

FFS: other mechanisms

 

 

R1-2210661        Summary#3 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From Oct 18th GTW session

Conclusion

Data collection may be performed for different purposes in LCM, e.g., model training, model inference, model monitoring, model selection, model update, etc., each of which may be done with different requirements and potential specification impact.

FFS: Model selection refers to the selection of an AI/ML model among models for the same functionality. (Exact terminology to be discussed/defined)

 

Agreement

Study potential specification impact needed to enable the development of a set of specific models, e.g., scenario-/configuration-specific and site-specific models, as compared to unified models.

Note: User data privacy needs to be preserved. The provision of assistance information may need to consider feasibility of disclosing proprietary information to the other side.

 

Agreement

Study the specification impact to support multiple AI models for the same functionality, at least including the following aspects:

·        Procedure and assistance signaling for the AI model switching and/or selection

FFS: Model selection refers to the selection of an AI/ML model among models for the same functionality. (Exact terminology to be discussed/defined)

 

Agreement

Study AI/ML model monitoring for at least the following purposes: model activation, deactivation, selection, switching, fallback, and update (including re-training).

FFS: Model selection refers to the selection of an AI/ML model among models for the same functionality. (Exact terminology to be discussed/defined)

 

Agreement

Study at least the following metrics/methods for AI/ML model monitoring in lifecycle management per use case:

Note: Model monitoring metric calculation may be done at NW or UE

 

 

From Oct 19th GTW session

Agreement

Study performance monitoring approaches, considering the following model monitoring KPIs as general guidance

·        Accuracy and relevance (i.e., how well does the given monitoring metric/methods reflect the model and system performance)

·        Overhead (e.g., signaling overhead associated with model monitoring)

·        Complexity (e.g., computation and memory cost for model monitoring)

·        Latency (i.e., timeliness of monitoring result, from model failure to action, given the purpose of model monitoring)

·        FFS: Power consumption

·        Other KPIs are not precluded.

Note: Relevant KPIs may vary across different model monitoring approaches.

FFS: Discussion of KPIs for other LCM procedures

 

Agreement

Study various approaches for achieving good performance across different scenarios/configurations/sites, including

·        Model generalization, i.e., using one model that is generalizable to different scenarios/configurations/sites

·        Model switching, i.e., switching among a group of models where each model is for a particular scenario/configuration/site

o   [Models in a group of models may have varying model structures, share a common model structure, or partially share a common sub-structure. Models in a group of models may have different input/output format and/or different pre-/post-processing.]

·        Model update, i.e., using one model whose parameters are flexibly updated as the scenario/configuration/site that the device experiences changes over time. Fine-tuning is one example.

Agreement

The following are additionally considered for the initial list of common KPIs (if applicable) for evaluating performance benefits of AI/ML

 

Conclusion

This RAN1 study considers ML TOP/FLOP/MACs as KPIs for computational complexity for inference. However, there may be a disconnection between actual complexity and the complexity evaluated using these KPIs due to the platform dependency and implementation (hardware and software) optimization solutions, which are out of the scope of 3GPP.
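As a hedged illustration of why parameter counts and FLOPs/MACs are platform-independent KPIs, a sketch counting them for a simple fully connected network; the layer sizes are hypothetical, and real models would typically be counted by profiling tools rather than by hand:

```python
def dense_layer_complexity(n_in, n_out, bias=True):
    """Complexity of one fully connected layer, independent of hardware:
    (parameters, MACs per inference, FLOPs per inference).
    One MAC (multiply-accumulate) is counted as 2 FLOPs."""
    params = n_in * n_out + (n_out if bias else 0)
    macs = n_in * n_out
    return params, macs, 2 * macs

def mlp_complexity(layer_sizes, bias=True):
    """Totals over consecutive fully connected layers, e.g. [4, 3, 2]
    describes a 4-input, 3-hidden, 2-output network."""
    total_p = total_m = total_f = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        p, m, f = dense_layer_complexity(n_in, n_out, bias)
        total_p += p
        total_m += m
        total_f += f
    return total_p, total_m, total_f
```

These counts say nothing about runtime on a given chip, which is exactly the platform-dependent gap the conclusion above notes.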

 

 

Final summary in R1-2210708.

9.2.2        AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2208366         Continued discussion on evaluation of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2208429         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2208521         Evaluation on AI for CSI feedback enhancement        ZTE

R1-2208547         Discussion on evaluation on AIML for CSI feedback enhancement        Spreadtrum Communications, BUPT

R1-2208634         Evaluation on AI/ML for CSI feedback enhancement vivo

R1-2208729         Evaluations on AI-CSI       Ericsson

R1-2208769         Evaluation on AI/ML for CSI feedback enhancement China Telecom

R1-2208850         Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement       OPPO

R1-2208878         On Evaluation of AI/ML based CSI Google

R1-2208899         Evaluation on AI/ML for CSI feedback enhancement LG Electronics

R1-2208967         Evaluation on AI/ML for CSI feedback enhancement CATT

R1-2209011         Evaluation on AI/ML for CSI feedback enhancement Fujitsu

R1-2209047         Evaluation for CSI feedback enhancements  Intel Corporation

R1-2209120         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2209131         Discussion on evaluation methodology and KPI on AI/ML for CSI feedback enhancement       Panasonic

R1-2209230         Some discussions on evaluation on AI-ML for CSI feedback   CAICT

R1-2209277         Discussion on evaluation on AI/ML for CSI feedback enhancement       xiaomi

R1-2209328         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2209367         Evaluation of ML for CSI feedback enhancement       Nokia, Nokia Shanghai Bell

R1-2209386         GRU for Historical CSI Prediction Sharp

R1-2209400         Evaluation on AI/ML for CSI feedback enhancement ETRI

R1-2209506         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

R1-2209548         Evaluation of AI/ML based methods for CSI feedback enhancement      Fraunhofer IIS, Fraunhofer HHI

R1-2209576         Evaluation on AI/ML for CSI feedback         Apple

R1-2209625         Evaluation of AI and ML for CSI feedback enhancement         NVIDIA

R1-2210272         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.  (rev of R1-2209640)

R1-2209652         Evaluation on AI/ML for CSI Feedback Enhancement              Mavenir

R1-2209722         Evaluation on AI ML for CSI feedback enhancement Samsung

R1-2209794         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2209896         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2209976         Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated

 

[110bis-e-R18-AI/ML-02] – Yuan (Huawei)

Email discussion on evaluation on CSI feedback enhancement by October 19

-        Check points: October 14, October 19

R1-2210365        Summary#1 of [110bis-e-R18-AI/ML-02]  Moderator (Huawei)

From Oct 10th GTW session

Conclusion

For the evaluation of the AI/ML based CSI feedback enhancement, if SLS is adopted, the ‘Traffic model’ in the baseline of EVM is captured as follows:

Traffic model

At least, FTP model 1 with packet size 0.5 Mbytes is assumed

Other options are not precluded.

 

 

R1-2210366         Summary#2 of [110bis-e-R18-AI/ML-02]    Moderator (Huawei)

R1-2210367        Summary#3 of [110bis-e-R18-AI/ML-02]  Moderator (Huawei)

From Oct 13th GTW session

Agreement

In the evaluation of the AI/ML based CSI feedback enhancement, for ‘Channel estimation’, if realistic DL channel estimation is considered, regarding how to calculate the intermediate KPI of CSI accuracy,

·        Use the target CSI from ideal channel and use output CSI from the realistic channel estimation

o   The target CSI from ideal channel equally applies to AI/ML based CSI feedback enhancement, and the baseline codebook

Note: there is no restriction on model training

 

 

R1-2210368         Summary#4 of [110bis-e-R18-AI/ML-02]    Moderator (Huawei)

 

Decision: As per email decision posted on Oct 17th,

Agreement

In the evaluation of the AI/ML based CSI feedback enhancement, for “Baseline for performance evaluation” in the EVM table, Type I Codebook (if it outperforms Type II Codebook) can optionally be considered for comparison with AI/ML schemes, up to companies

·        Note: Type II Codebook is baseline as agreed

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for the outdoor UEs, add O2I car penetration loss per TS 38.901 if the simulation assumes UEs inside vehicles.

 

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, no explicit trajectory modeling is considered for evaluation

 

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, and if the AI/ML model outputs multiple predicted instances, the intermediate KPI is calculated for each prediction instance

 

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, both of the following types of AI/ML model input are considered for evaluations:

·        Raw channel matrices

·        Eigenvector(s)

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for the evaluation of CSI prediction:

·        Companies are encouraged to report the assumptions on the observation window, including number/time distance of historic CSI/channel measurements as the input of the AI/ML model, and

·        Companies to report the assumptions on the prediction window, including number/time distance of predicted CSI/channel as the output of the AI/ML model

 

R1-2210369        Summary#5 of [110bis-e-R18-AI/ML-02]  Moderator (Huawei)

From Oct 18th GTW session

Conclusion

If ideal DL channel estimation is considered (which is optional) for the evaluations of CSI feedback enhancement, there is no consensus on how to use the ideal channel estimation for dataset construction, or performance evaluation/inference.

·        It is up to companies to report whether/how ideal channel is used in the dataset construction as well as performance evaluation/inference.

Conclusion

For the evaluation of Type 2 (Joint training of the two-sided model at network side and UE side, respectively), following procedure is considered as an example:

·        For each FP/BP loop,

o   Step 1: UE side generates the FP results (i.e., CSI feedback) based on the data sample(s), and sends the FP results to NW side

o   Step 2: NW side reconstructs the CSI based on FP results, trains the CSI reconstruction part, and generates the BP information (e.g., gradients), which are then sent to UE side

o   Step 3: UE side trains the CSI generation part based on the BP information from NW side

·        Note: the dataset between UE side and NW side is aligned.

·        Other Type 2 training approaches are not precluded and reported by companies
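The Step 1–3 FP/BP exchange above can be sketched with a toy linear two-sided model. This is an illustration only: the linear CSI generation/reconstruction parts, the dimensions, the learning rate, and all variable names are our own assumptions, not part of the agreed procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: CSI samples of size D compressed to a feedback of size M.
D, M, B, LR = 16, 4, 64, 0.1
X = rng.standard_normal((D, B))           # aligned dataset (same on UE and NW side)

W_ue = 0.1 * rng.standard_normal((M, D))  # UE-side CSI generation part (linear)
W_nw = 0.1 * rng.standard_normal((D, M))  # NW-side CSI reconstruction part (linear)

losses = []
for _ in range(500):                      # one FP/BP loop per iteration
    # Step 1: UE side generates the FP results (CSI feedback) and "sends" them to NW.
    Z = W_ue @ X
    # Step 2: NW side reconstructs the CSI, trains the reconstruction part, and
    # generates BP information (gradient w.r.t. the feedback) sent back to UE side.
    X_hat = W_nw @ Z
    Err = (X_hat - X) / B                 # d(loss)/d(X_hat) for the MSE loss below
    grad_W_nw = Err @ Z.T
    grad_Z = W_nw.T @ Err                 # BP information "sent" back to the UE side
    W_nw -= LR * grad_W_nw
    # Step 3: UE side trains the CSI generation part from the BP information.
    W_ue -= LR * grad_Z @ X.T
    losses.append(0.5 * float(np.sum((X_hat - X) ** 2)) / B)
```

Running the loop, the reconstruction loss decreases as both sides are trained, even though neither side ever sees the other side's model, only the FP results and the BP information.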

Conclusion

For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following procedure is considered for the sequential training starting with NW side training (NW-first training):

·        Step1: NW side trains the NW side CSI generation part (which is not used for inference) and the NW side CSI reconstruction part jointly

·        Step2: After NW side training is finished, NW side shares with the UE side a set of information (e.g., dataset) that is used by the UE side to be able to train the UE side CSI generation part

·        Step3: UE side trains the UE side CSI generation part based on the received set of information

·        Other Type 3 NW-first training approaches are not precluded and reported by companies

Conclusion

For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following procedure is considered for the sequential training starting with UE side training (UE-first training):

·        Step1: UE side trains the UE side CSI generation part and the UE side CSI reconstruction part (which is not used for inference) jointly

·        Step2: After UE side training is finished, UE side shares with the NW side a set of information (e.g., dataset) that is used by the NW side to be able to train the CSI reconstruction part

·        Step3: NW side trains the NW side CSI reconstruction part based on the received set of information

·        Other Type 3 UE-first training approaches are not precluded and reported by companies

Working assumption

In the evaluation of the AI/ML based CSI feedback enhancement, if SGCS is adopted as the intermediate KPI for the rank>1 situation, companies to ensure the correct calculation of SGCS and to avoid disorder issue of the output eigenvectors

·        Note: Eventual KPI can still be used to compare the performance

Agreement

For the evaluation of the AI/ML based CSI feedback enhancement, if the SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’ for rank>1 cases, at least Method 3 is adopted, FFS whether additionally adopt a down-selected metric between Method 1 and Method 2.

·        Method 1: Average over all layers

SGCS = ⟨ (1/K) Σ_{j=1..K} (1/N) Σ_{i=1..N} |v_{i,j}^H ṽ_{i,j}|² / (‖v_{i,j}‖² ‖ṽ_{i,j}‖²) ⟩

·        Method 2: Weighted average over all layers

SGCS = ⟨ (1/N) Σ_{i=1..N} Σ_{j=1..K} ( λ_{i,j} / Σ_{j'=1..K} λ_{i,j'} ) · |v_{i,j}^H ṽ_{i,j}|² / (‖v_{i,j}‖² ‖ṽ_{i,j}‖²) ⟩

where v_{i,j} is the jth eigenvector of the target CSI at resource unit i and K is the rank, ṽ_{i,j} is the jth output vector of the output CSI of resource unit i, N is the total number of resource units, ⟨·⟩ denotes the average operation over multiple samples, and λ_{i,j} is an eigenvalue of the channel covariance matrix corresponding to v_{i,j}.

·        Method 3: SGCS is separately calculated for each layer (e.g., for K layers, K SGCS values are derived respectively, and comparison is performed per layer)
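A minimal numpy sketch of the three SGCS options, using the notation of the agreement (v the target eigenvectors, ṽ the output CSI). The array shapes and function names are our own illustration; the outer average over multiple samples ⟨·⟩ would be a loop over calls to these functions.

```python
import numpy as np

def sgcs_per_layer(v, v_hat):
    """Per-layer SGCS, averaged over resource units (basis of Method 3).

    v, v_hat: complex arrays of shape (N, K, P) -- N resource units,
    K layers (rank), P ports; v holds target eigenvectors, v_hat the
    output CSI. Returns shape (K,): one SGCS value per layer."""
    inner = np.abs(np.sum(np.conj(v) * v_hat, axis=-1)) ** 2
    norm = (np.linalg.norm(v, axis=-1) * np.linalg.norm(v_hat, axis=-1)) ** 2
    return np.mean(inner / norm, axis=0)

def sgcs_method1(v, v_hat):
    """Method 1: plain average over all K layers."""
    return float(np.mean(sgcs_per_layer(v, v_hat)))

def sgcs_method2(v, v_hat, eigvals):
    """Method 2: eigenvalue-weighted average over layers.

    eigvals: shape (N, K), eigenvalue per resource unit and layer."""
    inner = np.abs(np.sum(np.conj(v) * v_hat, axis=-1)) ** 2
    norm = (np.linalg.norm(v, axis=-1) * np.linalg.norm(v_hat, axis=-1)) ** 2
    w = eigvals / np.sum(eigvals, axis=1, keepdims=True)  # per-unit weights
    return float(np.mean(np.sum(w * inner / norm, axis=1)))
```

With a perfect output (ṽ = v, up to a phase), all three options evaluate to 1; by Cauchy–Schwarz they are upper-bounded by 1 for any output.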

Agreement

In CSI compression using two-sided model use case, evaluate and study quantization of CSI feedback, including at least the following aspects:

·        Quantization non-aware training

·        Quantization-aware training

·        Quantization methods including uniform vs non-uniform quantization, scalar versus vector quantization, and associated parameters, e.g., quantization resolution, etc.

·        How to use the quantization methods
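To make the scalar-versus-vector axis of the study concrete, here is a hedged sketch of the two quantizer families applied to a CSI feedback vector. Function names, the value range, and the random codebook are illustrative assumptions; quantization-aware versus non-aware training concerns where such a quantizer sits in the training loop and is not shown.

```python
import numpy as np

def uniform_scalar_quantize(z, bits, lo=-1.0, hi=1.0):
    """Uniform scalar quantization: each feedback dimension is mapped
    independently to one of 2**bits evenly spaced levels in [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    idx = np.clip(np.round((z - lo) / step), 0, levels - 1)
    return lo + idx * step

def vector_quantize(z, codebook):
    """Vector quantization: the whole feedback vector is replaced by its
    nearest codebook entry (log2(len(codebook)) bits for the full vector)."""
    d = np.linalg.norm(codebook - z, axis=1)
    return codebook[np.argmin(d)]
```

Usage follows the expected pattern: raising the scalar resolution (more bits per dimension) shrinks the quantization error, while vector quantization trades per-dimension resolution for a jointly designed codebook.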

 

R1-2210752        Summary#6 of [110bis-e-R18-AI/ML-02]  Moderator (Huawei)

From Oct 19th GTW session

Agreement

For evaluating the performance impact of ground-truth quantization in the CSI compression, study high resolution quantization methods for ground-truth CSI, e.g., including at least the following options

·        High resolution scalar quantization, e.g., Float32, Float16, etc.

o   FFS select one of the scalar quantization resolutions as baseline

·        High resolution codebook quantization, e.g., R16 Type II-like method with new parameters

o   FFS new parameters

·        Other quantization methods are not precluded

Agreement

For the evaluation of the potential performance benefits of model fine-tuning of CSI feedback enhancement which is optionally considered by companies, the following case is taken

·        The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model is updated based on a fine-tuning dataset different than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B. After that, the AI/ML model is tested on a different dataset than Scenario#A/Configuration#A, e.g., subject to Scenario#B/Configuration#B, Scenario#A/Configuration#B

·        Company to report the fine-tuning dataset setting (e.g., size of dataset) and the improvement of performance

Agreement

For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following cases are considered for evaluations:

·        Case 1 (baseline): Aligned AI/ML model structure between NW side and UE side

·        Case 2: Not aligned AI/ML model structures between NW side and UE side

o   Companies to report the AI/ML structures for the UE part model and the NW part model, e.g., different backbone (e.g., CNN, Transformer, etc.), or same backbone but different structure (e.g., number of layers)

·        FFS different sizes of datasets between NW side and UE side

·        FFS aligned/different quantization/dequantization methods between NW side and UE side

·        FFS: whether/how to evaluate the case where the input/output types and/or pre/post-processing are not aligned between NW part model and UE part model

Agreement

For the evaluation of Type 2 (Joint training of the two-sided model at network side and UE side, respectively), the following evaluation cases are considered for multi-vendors,

·        Case 1 (baseline): Type 2 training between one NW part model to one UE part model

·        Case 2: Type 2 training between one NW part model and M>1 separate UE part models

o   Companies to report the AI/ML structures for the UE part model and the NW part model

o   FFS Companies to report the dataset used at UE part models, e.g., whether the same or different dataset(s) are used among M UE part models

·        Case 3: Type 2 training between one UE part model and N>1 separate NW part models

o   Companies to report the AI/ML structures for the UE part model and the NW part model

o   FFS Companies to report the dataset used at NW part models, e.g., whether the same or different dataset(s) are used among N NW part models

·        FFS N NW part models to M UE part models

·        FFS different quantization/dequantization methods between NW and UE

·        FFS: whether/how to evaluate the case where the input/output types and/or pre/post-processing are not aligned between NW part model and UE part model

·        FFS: companies to report the training order of UE-NW pair(s) in case of M UE part models and/or N NW part models

·        FFS: whether/how to report overhead

Agreement

For the evaluation of the AI/ML based CSI compression sub use cases, at least the following types of AI/ML model input (for CSI generation part)/output (for CSI reconstruction part) are considered for evaluations

·        Raw channel matrix, e.g., channel matrix with the dimensions of Tx, Rx, and frequency unit

o   Companies to report the raw channel is in frequency domain or delay domain

·        Precoding matrix

o   Companies to report the precoding matrix is a group of eigenvector(s) or an eType II-like reporting (i.e., eigenvectors with angular-delay domain representation)

·        Other input/output types are not precluded

·        Companies to report the combination of input (for CSI generation part) and output (for CSI reconstruction part),

o   Note: the input and output may be of different types

Conclusion

If the AI/ML based CSI prediction sub use case is to be selected as a sub use case, for SLS, spatial consistency procedure A with 50m decorrelation distance from 38.901 is used (if not used, company should state this in their simulation assumptions)

·        UE velocity vector is assumed as fixed over time in Procedure A modeling

Agreement

In the evaluation of the AI/ML based CSI feedback enhancement, for the calculation of intermediate KPI, the following is considered as the granularity of the frequency unit for averaging operation

·        For 15kHz SCS: For 10MHz bandwidth: 4 RBs; for 20MHz bandwidth: 8 RBs

·        For 30kHz SCS: For 10MHz bandwidth: 2 RBs; for 20MHz bandwidth: 4 RBs

·        Note: Other frequency unit granularity is not precluded and reported by companies
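A small helper can encode the agreed granularities; here the per-RB intermediate KPI is averaged over each frequency unit. The function name and the interpretation of "averaging operation" as a per-unit mean of per-RB values are our own assumptions.

```python
import numpy as np

# Agreed granularity of the frequency unit (in RBs) for the averaging
# operation, keyed by (SCS in kHz, bandwidth in MHz).
FREQ_UNIT_RBS = {
    (15, 10): 4, (15, 20): 8,
    (30, 10): 2, (30, 20): 4,
}

def average_per_frequency_unit(per_rb_kpi, scs_khz, bw_mhz):
    """Average a per-RB intermediate KPI over each frequency unit."""
    rbs = FREQ_UNIT_RBS[(scs_khz, bw_mhz)]
    n_units = len(per_rb_kpi) // rbs
    trimmed = np.asarray(per_rb_kpi[: n_units * rbs])
    return trimmed.reshape(n_units, rbs).mean(axis=1)
```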

 

Final summary in R1-2210753.

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2208367         Continued discussion on other aspects of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2208430         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2208522         Discussion on other aspects for AI CSI feedback enhancement ZTE

R1-2208548         Discussion on other aspects on AIML for CSI feedback            Spreadtrum Communications

R1-2208635         Other aspects on AI/ML for CSI feedback enhancement           vivo

R1-2208728         Discussions on AI-CSI      Ericsson

R1-2208770         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2208851         On sub use cases and other aspects of AI/ML for CSI feedback enhancement               OPPO

R1-2208879         On Enhancement of AI/ML based CSI           Google

R1-2208900         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2208968         Discussion on AI/ML for CSI feedback enhancement CATT

R1-2209012         Views on specification impact for CSI compression with two-sided model               Fujitsu

R1-2209048         Use-cases and specification for CSI feedback              Intel Corporation

R1-2209095         Considerations on CSI measurement enhancements via AI/ML Sony

R1-2209121         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2209161         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2209231         Discussions on AI-ML for CSI feedback       CAICT

R1-2209278         Discussion on specification impact for AI/ML based CSI feedback        xiaomi

R1-2209329         Discussion on other aspects on AI/ML for CSI feedback enhancement  CMCC

R1-2209368         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2209390         Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement            TCL Communication

R1-2209401         Discussion on other aspects on AI/ML for CSI feedback enhancement  ETRI

R1-2209424         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2209507         Other aspects on AI/ML for CSI feedback enhancement           MediaTek Inc.

R1-2209577         Other aspects on AI/ML for CSI      Apple

R1-2209626         AI and ML for CSI feedback enhancement   NVIDIA

R1-2209641         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2209723         Representative sub use cases for CSI feedback enhancement    Samsung

R1-2209795         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2209897         Discussion on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.

R1-2209977         Other aspects on AI/ML for CSI feedback enhancement           Qualcomm Incorporated

 

[110bis-e-R18-AI/ML-03] – Huaning (Apple)

Email discussion on other aspects on AI/ML for CSI feedback enhancement by October 19

-        Check points: October 14, October 19

R1-2210319         Summary #1 on other aspects of AI/ML for CSI enhancement Moderator (Apple)

R1-2210320        Summary #2 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Oct 14th GTW session

Conclusion

Joint CSI prediction and CSI compression is NOT selected as one representative sub-use case for CSI feedback enhancement use case.

 

Conclusion

CSI accuracy enhancement based on traditional codebook design is NOT selected as one representative sub-use case for CSI feedback enhancement use case.

 

Conclusion

Temporal-spatial-frequency domain CSI compression using two-sided model is NOT selected as one representative sub-use case for CSI enhancement use case.

·         Up to each company to report whether past CSI is used as model input for spatial-frequency domain CSI compression

 

 

R1-2210321        Summary #3 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

Presented in Oct 18th GTW session

 

R1-2210611        Summary #4 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Oct 19th GTW session

Agreement

In CSI compression using two-sided model use case, study potential specification impact for performance monitoring including:

 

Agreement

In CSI compression using two-sided model use case, further study potential specification impact related to assistance signaling and procedure for model performance monitoring.

 

Agreement

In CSI compression using two-sided model use case, further study potential specification impact related to potential co-existence and fallback mechanisms between AI/ML-based CSI feedback mode and legacy non-AI/ML-based CSI feedback mode.

 

Agreement

In CSI compression using two-sided model use case, further study at least the following options for performance monitoring metrics/methods:

·        Intermediate KPIs as monitoring metrics (e.g., SGCS)

·        Eventual KPIs (e.g., Throughput, hypothetical BLER, BLER, NACK/ACK).

·        Legacy CSI based monitoring: schemes using additional legacy CSI reporting

·        Other monitoring solutions, at least including the following option:

o   Input or Output data based monitoring: such as data drift between training dataset and observed dataset and out-of-distribution detection

 

Agreement

In CSI compression using two-sided model use case, further study at least use cases of the following potential specification impact on quantization method alignment between CSI generation part at UE and CSI reconstruction part at gNB:

·         Alignment of the quantization/dequantization method and the feedback message size between Network and UE

9.2.3        AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2208368         Continued discussion on evaluation of AI/ML for beam management               FUTUREWEI

R1-2208431         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2208523         Evaluation on AI for beam management       ZTE

R1-2208549         Evaluation on AI for beam management       Spreadtrum Communications

R1-2208636         Evaluation on AI/ML for beam management vivo

R1-2210240         Discussion for evaluation on AI/ML for beam management     InterDigital, Inc.  (rev of R1-2208682)

R1-2208771         Evaluation on AI/ML for beam management China Telecom

R1-2208852         Evaluation methodology and preliminary results on AI/ML for beam management               OPPO

R1-2210327         On Evaluation of AI/ML based Beam Management    Google  (rev of R1-2208880)

R1-2208901         Evaluation on AI/ML for beam management LG Electronics

R1-2208906         Evaluation on AI/ML for beam management Ericsson

R1-2208969         Evaluation on AI/ML for beam management CATT

R1-2209013         Evaluation on AI/ML for beam management Fujitsu

R1-2209049         Evaluations for AI/ML beam management   Intel Corporation

R1-2209122         Evaluation on AI/ML for beam management Lenovo

R1-2209232         Some discussions on evaluation on AI-ML for Beam management         CAICT

R1-2209279         Evaluation on AI/ML for beam management xiaomi

R1-2209330         Discussion on evaluation on AI/ML for beam management      CMCC

R1-2209369         Evaluation of ML for beam management      Nokia, Nokia Shanghai Bell

R1-2209508         Evaluation on AI/ML for beam management MediaTek Inc.

R1-2209578         Evaluation on AI/ML for beam management Apple

R1-2209613         Evaluation of AI/ML based beam management           Rakuten Symphony

R1-2209627         Evaluation of AI and ML for beam management         NVIDIA

R1-2209724         Evaluation on AI ML for Beam management              Samsung

R1-2209898         Discussion on evaluation on AI/ML for beam management      NTT DOCOMO, INC.

R1-2209978         Evaluation on AI/ML for beam management Qualcomm Incorporated

R1-2210107         Evaluation on AI/ML for beam management CEWiT

 

[110bis-e-R18-AI/ML-04] – Feifei (Samsung)

Email discussion on evaluation on AI/ML for beam management by October 19

-        Check points: October 14, October 19

R1-2210359        Feature lead summary #0 evaluation of AI/ML for beam management               Moderator (Samsung)

From Oct 10th GTW session

Working Assumption

The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:

·        Case 1: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a dataset from the same Scenario#A/Configuration#A

·        Case 2: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model performs inference/test on a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B

·        Case 3: The AI/ML model is trained based on training dataset constructed by mixing datasets from multiple scenarios/configurations including Scenario#A/Configuration#A and a different dataset than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B, and then the AI/ML model performs inference/test on a dataset from a single Scenario/Configuration from the multiple scenarios/configurations, e.g.,  Scenario#A/Configuration#A, Scenario#B/Configuration#B, Scenario#A/Configuration#B.

o   Note: Companies to report the ratio for dataset mixing

o   Note: number of the multiple scenarios/configurations can be larger than two

·        FFS the detailed set of scenarios/configurations

·        FFS other cases for generalization verification, e.g.,

o   Case 2A: The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model is updated based on a fine-tuning dataset different than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B. After that, the AI/ML model is tested on a different dataset than Scenario#A/Configuration#A, e.g., subject to Scenario#B/Configuration#B, Scenario#A/Configuration#B.

Conclusion

For system performance related KPI (if supported) evaluation (model inference), companies report either of the following traffic model:

·        Option 1: Full buffer

·        Option 2: FTP model with detail assumptions (e.g., FTP model 1, FTP model 3)

Agreement

·        BS antenna configuration:

o   antenna setup and port layouts at gNB: (4, 8, 2, 1, 1, 1, 1), (dV, dH) = (0.5, 0.5) λ

o   Other assumptions are not precluded

·        BS Tx power for evaluation:

o   40dBm (baseline)

o   Other values (e.g. 34 dBm) are not precluded and can be reported by companies

·        UE antenna configuration (Clarification of agreement in RAN 1 #110):

o   antenna setup and port layouts at UE: (1, 4, 2, 1, 2, 1, 1), 2 panels (left, right)

o   Other assumptions are not precluded

Agreement

·        For the evaluation of both BM-Case1 and BM-Case2, 32 or 64 downlink Tx beams (maximum number of available beams) are assumed at NW side.

o   Other values, e.g., 256, etc, are not precluded and can be reported by companies.

·        For the evaluation of both BM-Case1 and BM-Case2, 4 or 8 downlink Rx beams (maximum number of available beams) are assumed per UE panel at UE side.

o   Other values, e.g., 16, etc, are not precluded and can be reported by companies.

 

R1-2210360        Feature lead summary #1 evaluation of AI/ML for beam management               Moderator (Samsung)

From Oct 14th GTW session

Agreement

The options to evaluate beam prediction accuracy (%):

·        Top-1 (%): the percentage of “the Top-1 genie-aided beam is Top-1 predicted beam”

·        Top-K/1 (%): the percentage of “the Top-1 genie-aided beam is one of the Top-K predicted beams”

·        Top-1/K (%) (Optional): the percentage of “the Top-1 predicted beam is one of the Top-K genie-aided beams”

·        Where K >1 and values can be reported by companies.
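The three accuracy options can be computed directly from per-sample L1-RSRP arrays; the sketch below is illustrative (the function name, array layout, and default K are our own choices, not agreed definitions).

```python
import numpy as np

def beam_prediction_accuracy(rsrp_true, rsrp_pred, k=4):
    """Top-1, Top-K/1 and Top-1/K beam prediction accuracy (%).

    rsrp_true, rsrp_pred: (num_samples, num_beams) L1-RSRP arrays for the
    genie-aided measurements and the model predictions, respectively."""
    genie_top1 = np.argmax(rsrp_true, axis=1)
    pred_order = np.argsort(-rsrp_pred, axis=1)   # best predicted beam first
    genie_order = np.argsort(-rsrp_true, axis=1)  # best genie-aided beam first
    # Top-1: the Top-1 genie-aided beam is the Top-1 predicted beam.
    top1 = np.mean(pred_order[:, 0] == genie_top1)
    # Top-K/1: the Top-1 genie-aided beam is among the Top-K predicted beams.
    topk_1 = np.mean([g in p[:k] for g, p in zip(genie_top1, pred_order)])
    # Top-1/K: the Top-1 predicted beam is among the Top-K genie-aided beams.
    top1_k = np.mean([p[0] in g[:k] for p, g in zip(pred_order, genie_order)])
    return 100 * top1, 100 * topk_1, 100 * top1_k
```

A perfect predictor scores 100% under all three options; with K equal to the total number of beams, Top-K/1 and Top-1/K degenerate to 100% regardless of the predictions, which is why K is reported alongside the result.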

Agreement

For DL Tx beam prediction, the definition of Top-1 genie-aided Tx beam considers the following options

·        Option A, the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx and Rx beams

·        Option B, the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx beams with specific Rx beam(s)

o   FFS on specific Rx beam(s)

o   Note: specific Rx beams are subset of all Rx beams

 

R1-2210361        Feature lead summary #2 evaluation of AI/ML for beam management               Moderator (Samsung)

From Oct 18th GTW session

Agreement

For DL Tx-Rx beam pair prediction, the definition of Top-1 genie-aided Tx-Rx beam pair considers the following options:

·        Option A: The Tx-Rx beam pair that results in the largest L1-RSRP over all Tx and Rx beams

·        Option B: The Tx-Rx beam pair that results in the largest L1-RSRP over all Tx beams with specific Rx beam(s)

o   FFS on specific Rx beam(s)

o   Note: specific Rx beams are subset of all Rx beams

 

R1-2210362        Feature lead summary #3 evaluation of AI/ML for beam management               Moderator (Samsung)

From Oct 19th GTW session

Agreement

·        Companies to report the selected scenarios/configurations for generalization verification

·        Note: other approaches for achieving good generalization performance for AI/ML-based schemes are not precluded.

Working Assumption

For both BM-Case1 and BM-Case 2, the following table is adopted as working assumption for reporting the evaluation results.

 

Table X. Evaluation results for [BM-Case1 or BM-Case2] without model generalization for [DL Tx beam prediction or Tx-Rx beam pair prediction or Rx beam prediction]

 

|                                            |                                                               | Company A | …… |
|--------------------------------------------|---------------------------------------------------------------|-----------|-----|
| Assumptions                                | Number of [beams/beam pairs] in Set A                         |           |     |
|                                            | Number of [beams/beam pairs] in Set B                         |           |     |
|                                            | Baseline scheme                                               |           |     |
| AI/ML model input/output                   | Model input                                                   |           |     |
|                                            | Model output                                                  |           |     |
| Data Size                                  | Training                                                      |           |     |
|                                            | Testing                                                       |           |     |
| AI/ML model                                | [Short model description]                                     |           |     |
|                                            | Model complexity                                              |           |     |
|                                            | Computational complexity                                      |           |     |
| Evaluation results [With AI/ML / baseline] | [Beam prediction accuracy (%)]: [KPI A]                       |           |     |
|                                            | [Beam prediction accuracy (%)]: [KPI B]                       |           |     |
|                                            | [L1-RSRP Diff]: [Average L1-RSRP diff]                        |           |     |
|                                            | [System performance]: [RS overhead Reduction (%)/RS overhead] |           |     |
|                                            | [System performance]: [UCI report]                            |           |     |
|                                            | [System performance]: [UPT]                                   |           |     |

 

To report the following in table caption:

·        Which side the model is deployed

Further info for the columns:

·        Assumptions

o   Number of beams/beam pairs in Set A

o   Number of beams/beam pairs in Set B

o   Baseline scheme, e.g., Option 1 (exhaustive beam sweeping), Option 2 (based on measurements of Set B), or baseline described by companies

o   Other assumptions can be added later based on agreements

·        Model input: input type(s)

·        Model output: output type(s), e.g., the best DL Tx and/or Rx beam ID, and/or L1-RSRPs of N beams(pairs)

·        Dataset size, both the size of training/validation dataset and the size of test dataset

·        Short model description: e.g., CNN, LSTM

·        Model complexity, in terms of “number of model parameters” and/or size (e.g., Mbytes)

·        Computational complexity in terms of FLOPs

·        Evaluation results: agreed KPIs, with AI/ML / with baseline scheme (if applicable)

·        Note: To report other simulation assumptions, if any.

Agreement

·        Study the following options on the selection of Set B of beams (pairs)

 

Working assumption

 

Agreement

·        At least for BM-Case 2, consider the following assumptions for evaluation

o   Periodicity of time instance for each measurement/report in T1:

§  20ms, 40ms, 80ms, [100ms], 160ms, [960ms]

§  Other values can be reported by companies.

o   Number of time instances for measurement/report in T1 can be reported by companies.

o   Time instance(s) for prediction can be reported by companies.

9.2.3.2       Other aspects on AI/ML for beam management

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2208369         Continued discussion on other aspects of AI/ML for beam management               FUTUREWEI

R1-2208432         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2208524         Discussion on other aspects for AI beam management              ZTE

R1-2208550         Discussion on other aspects on AIML for beam management   Spreadtrum Communications

R1-2208637         Other aspects on AI/ML for beam management          vivo

R1-2208683         Discussion for other aspects on AI/ML for beam management InterDigital, Inc.

R1-2208853         Other aspects of AI/ML for beam management           OPPO

R1-2208881         On Enhancement of AI/ML based Beam Management              Google

R1-2208902         Other aspects on AI/ML for beam management          LG Electronics

R1-2208907         Discussion on AI/ML for beam management Ericsson

R1-2208970         Discussion on AI/ML for beam management CATT

R1-2209014         Sub use cases and specification impact on AI/ML for beam management               Fujitsu

R1-2209050         Use-cases and Specification Impact for AI/ML beam management         Intel Corporation

R1-2209096         Consideration on AI/ML for beam management          Sony

R1-2209123         Further aspects of AI/ML for beam management        Lenovo

R1-2209146         Discussion on AI/ML for beam management NEC

R1-2209233         Discussions on AI-ML for Beam management            CAICT

R1-2209280         Discussion on other aspects on AI/ML for beam management  xiaomi

R1-2209331         Discussion on other aspects on AI/ML for beam management  CMCC

R1-2209370         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2209391         Discussions on Sub-Use Cases in AI/ML for Beam Management           TCL Communication

R1-2209402         Discussion on other aspects on AI/ML for beam management  ETRI

R1-2209509         Other aspects on AI/ML for beam management          MediaTek Inc.

R1-2209579         Other aspects on AI/ML for beam management          Apple

R1-2209614         Discussion on AI/ML for beam management Rakuten Symphony

R1-2209628         AI and ML for beam management  NVIDIA

R1-2209725         Representative sub use cases for beam management   Samsung

R1-2209899         Discussion on AI/ML for beam management NTT DOCOMO, INC.

R1-2209979         Other aspects on AI/ML for beam management          Qualcomm Incorporated

R1-2210085         Discussion on sub use cases of AI/ML beam management        Panasonic

R1-2210086         Discussion on other aspects on AI/ML for beam management  KT Corp.

 

[110bis-e-R18-AI/ML-05] – Zhihua (OPPO)

Email discussion on other aspects of AI/ML for beam management by October 19

-        Check points: October 14, October 19

R1-2210353        Summary#1 for other aspects on AI/ML for beam management       Moderator (OPPO)

R1-2210354        Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Oct 14th GTW session

Conclusion

For AI/ML based beam management, there is no consensus in RAN1 to study any other sub use case in addition to BM-Case1 and BM-Case2.

Note: this conclusion is independent of the discussion on the alternatives of AI/ML model inputs for BM-Case1 and BM-Case2.

 

Conclusion

For the sub use case BM-Case1 and BM-Case2, Set B is a set of beams whose measurements are taken as inputs of the AI/ML model.

 

 

R1-2210355         Summary#3 for other aspects on AI/ML for beam management              Moderator (OPPO)

R1-2210356        Summary#4 for other aspects on AI/ML for beam management       Moderator (OPPO)

Presented in Oct 18th GTW session

 

 

R1-2210357        Summary#5 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Oct 19th GTW session

Agreement

For BM-Case1 with a UE-side AI/ML model, study the potential specification impact of L1 signaling to report the following information of AI/ML model inference to NW

·        The beam(s) that is based on the output of AI/ML model inference

·        FFS: Predicted L1-RSRP corresponding to the beam(s)

·        FFS: other information

Agreement

For BM-Case2 with a UE-side AI/ML model, study the potential specification impact of L1 signaling to report the following information of AI/ML model inference to NW

·        The beam(s) of N future time instance(s) that is based on the output of AI/ML model inference

o   FFS: value of N

·        FFS: Predicted L1-RSRP corresponding to the beam(s)

·        Information about the timestamp corresponding to the reported beam(s)

o   FFS: explicit or implicit

·        FFS: other information

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, study the following alternatives for model monitoring with potential down-selection:

·        Alt1. UE-side model monitoring

o   UE monitors the performance metric(s)

o   UE makes decision(s) of model selection/activation/deactivation/switching/fallback operation

·        Alt2. NW-side model monitoring

o   NW monitors the performance metric(s)

o   NW makes decision(s) of model selection/activation/deactivation/switching/fallback operation

·        Alt3. Hybrid model monitoring

o   UE monitors the performance metric(s)

o   NW makes decision(s) of model selection/activation/deactivation/switching/fallback operation

 

Decision: As per email decision posted on Oct 19th,

Working Assumption

For BM-Case1 and BM-Case2 with a network-side AI/ML model, study the following L1 beam reporting enhancement for AI/ML model inference

·        UE to report the measurement results of more than 4 beams in one reporting instance

·        Other L1 reporting enhancements can be considered

Agreement

For BM-Case1 and BM-Case2 with a network-side AI/ML model, study the NW-side model monitoring:

·        NW monitors the performance metric(s) and makes decision(s) of model selection/activation/deactivation/switching/fallback operation

 

Agreement

Regarding NW-side model monitoring for a network-side AI/ML model of BM-Case1 and BM-Case2, study the potential specification impacts from the following aspects

·        Beam measurement and report for model monitoring

·        Note: This may or may not have specification impact.

 

Final summary in R1-2210764.

9.2.4        AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2208399         Evaluation of AI/ML for Positioning Accuracy Enhancement  Ericsson

R1-2208433         Evaluation on AI/ML for positioning accuracy enhancement    Huawei, HiSilicon

R1-2208525         Evaluation on AI for positioning enhancement            ZTE

R1-2208638         Evaluation on AI/ML for positioning accuracy enhancement    vivo

R1-2208772         Evaluation on AI/ML for positioning accuracy enhancement    China Telecom

R1-2208854         Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement       OPPO

R1-2208882         On Evaluation of AI/ML based Positioning  Google

R1-2208903         Evaluation on AI/ML for positioning accuracy enhancement    LG Electronics

R1-2208971         Evaluation on AI/ML for positioning enhancement    CATT

R1-2209015         Discussions on evaluation of AI positioning accuracy enhancement       Fujitsu

R1-2209124         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2209234         Some discussions on evaluation on AI-ML for positioning accuracy enhancement               CAICT

R1-2209281         Evaluation on AI/ML for positioning accuracy enhancement    xiaomi

R1-2209332         Discussion on evaluation on AI/ML for positioning accuracy enhancement               CMCC

R1-2209371         Evaluation of ML for positioning accuracy enhancement          Nokia, Nokia Shanghai Bell

R1-2209484         Evaluation on AI/ML for positioning accuracy enhancement    InterDigital, Inc.

R1-2209510         Evaluation on AI/ML for positioning accuracy enhancement    MediaTek Inc.

R1-2209537         Evaluation on AI/ML for positioning accuracy enhancement    Fraunhofer IIS, Fraunhofer HHI

R1-2209580         Evaluation on AI/ML for positioning accuracy enhancement    Apple

R1-2209615         Evaluation of AI/ML based positioning accuracy enhancement Rakuten Symphony

R1-2209629         Evaluation of AI and ML for positioning enhancement             NVIDIA

R1-2209726         Evaluation on AI ML for Positioning            Samsung

R1-2209980         Evaluation on AI/ML for positioning accuracy enhancement    Qualcomm Incorporated

 

[110bis-e-R18-AI/ML-06] – Yufei (Ericsson)

Email discussion on evaluation on AI/ML for positioning accuracy enhancement by October 19

-        Check points: October 14, October 19

R1-2210385         Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2210386        Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Oct 14th GTW session

Agreement

To investigate the model generalization capability, the following aspect is also considered for the evaluation of AI/ML based positioning:

·        InF scenarios, e.g., training dataset from one InF scenario (e.g., InF-DH), test dataset from a different InF scenario (e.g., InF-HH)

Agreement

For both direct AI/ML positioning and AI/ML assisted positioning, if fine-tuning is not evaluated, the template agreed in RAN1#110 is updated to the following for reporting the evaluation results.

Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [short model description]

Columns: Model input | Model output | Label | Settings (e.g., drops, clutter param, mix): Train, Test | Dataset size: Train, Test | AI/ML complexity: Model complexity, Computation complexity | Horizontal pos. accuracy at CDF=90% (m): AI/ML

(Empty cells to be filled in by each company for each evaluated AI/ML model.)

Agreement

For both direct AI/ML positioning and AI/ML assisted positioning, if fine-tuning is evaluated, the template agreed in RAN1#110 is updated to the following for reporting the evaluation results.

Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [short model description]

Columns: Model input | Model output | Label | Settings (e.g., drops, clutter param, mix): Train, Fine-tune, Test | Dataset size: Train, Fine-tune, Test | AI/ML complexity: Model complexity, Computation complexity | Horizontal pos. accuracy at CDF=90% (m): AI/ML

(Empty cells to be filled in by each company for each evaluated AI/ML model.)

Agreement

For AI/ML-assisted positioning, companies report which construction is applied in their evaluation:

·        Single-TRP construction: the input of the ML model is the channel measurement between the target UE and a single TRP, and the output of the ML model is for the same pair of UE and TRP.

·        Multi-TRP construction: the input of the ML model contains N sets of channel measurements between the target UE and N (N>1) TRPs, and the output of the ML model contains N sets of values, one for each of the N TRPs.

Note: For a measurement (e.g., RSTD) which is a relative value between a given TRP and a reference TRP, the TRP in “single-TRP” and “multi-TRP” refers to the given TRP only.

Note: For single-TRP construction, companies report whether they consider same model for all TRPs or N different models for TRPs
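As an illustration only, the single-TRP and multi-TRP constructions above differ in the model's input/output shape, as in the following sketch; the toy models (simple peak-picking in place of a trained network) and the dimensions are assumptions, not part of the agreement.

```python
import random

# Hypothetical sketch of the two constructions for AI/ML assisted positioning.
# N_TRP and N_T are example dimensions; the "models" are stand-ins.

N_TRP, N_T = 18, 256            # e.g., 18 TRPs, 256 time-domain samples each

def single_trp_model(cir_one_trp):
    """Toy per-link model: one TRP's channel measurement in, one value out
    (here the strongest-tap index, standing in for e.g. a ToA estimate)."""
    return max(range(len(cir_one_trp)), key=lambda i: abs(cir_one_trp[i]))

def multi_trp_model(cir_all_trps):
    """Toy joint model: N sets of channel measurements in, N values out."""
    return [single_trp_model(c) for c in cir_all_trps]

random.seed(0)
cir = [[random.gauss(0, 1) for _ in range(N_T)] for _ in range(N_TRP)]

out_single = [single_trp_model(c) for c in cir]   # same model applied per TRP
out_multi = multi_trp_model(cir)                  # one model, all TRPs jointly
```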

 

Conclusion

For evaluation of AI/ML based positioning, suspend the discussion on intra-site (or zone-specific) variations until concepts and channel model construction not in TR 38.901 (e.g., “intra-site” or “zone”) are clarified under AI 9.2.1.

Note: An individual company can still submit evaluation results for intra-site variation.

 

Conclusion

For evaluation of AI/ML based positioning, the sampling period is selected by proponent companies. Each company reports the sampling period used in their evaluation.

 

Agreement

For evaluation of AI/ML assisted positioning, the following intermediate performance metrics are used:

·        LOS classification accuracy, if the model output includes LOS/NLOS indicator of hard values, where the LOS/NLOS indicator is generated for a link between UE and TRP;

·        Timing estimation accuracy (expressed in meters), if the model output includes timing estimation (e.g., ToA, RSTD).

·        Angle estimation accuracy (in degrees), if the model output includes angle estimation (e.g., AoA, AoD).

·        Companies provide info on how LOS classification accuracy and timing/angle estimation accuracy are estimated, if the ML output is a soft value that represents a probability distribution (e.g., probability of LOS, probability of timing, probability of angle, mean and variance of timing/angle, etc.)
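A minimal sketch of the first two intermediate metrics listed above, hard-value LOS classification accuracy and timing estimation error expressed in meters (via the speed of light); the sample values are made up for illustration.

```python
# Hypothetical sketch of intermediate performance metrics for AI/ML assisted
# positioning. Example labels and ToA values are assumptions.

C = 299_792_458.0  # speed of light, m/s

def los_accuracy(pred, truth):
    """Fraction of UE-TRP links whose hard LOS/NLOS indicator is correct."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def timing_error_m(toa_est_s, toa_true_s):
    """Per-link timing estimation error converted from seconds to meters."""
    return [abs(e - t) * C for e, t in zip(toa_est_s, toa_true_s)]

acc = los_accuracy([1, 0, 1, 1], [1, 0, 0, 1])    # 3 of 4 links correct
err_m = timing_error_m([101e-9], [100e-9])        # 1 ns of ToA error
```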

 

R1-2210387         Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2210388         Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2210650        Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Oct 18th GTW session

Conclusion

For evaluation of AI/ML based positioning, it’s up to each company to take into account the channel estimation error in their evaluation. Companies describe the details of their simulation assumption, e.g., realistic or ideal channel estimation, error models, receiver algorithms.

 

 

R1-2210651         Summary #6 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2210652        Final Summary of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Oct 19th GTW session

 

Agreement

For AI/ML assisted positioning, when single-TRP construction is used for the AI/ML model, companies report at least the AI/ML complexity (Model complexity, Computation complexity) for N TRPs, which are used to determine the position of a target UE.

Table. Model complexity and computation complexity to support N TRPs for a target UE

 

·        Single-TRP, same model for N TRPs

o   Model complexity to support N TRPs: P, when the model is at UE-side, where P is the model complexity of the same model (FFS: if the model is at network-side)

o   Computation complexity to process N TRPs: N × C, where C is the computation complexity of the same model for one TRP

·        Single-TRP, N models for N TRPs

o   Model complexity to support N TRPs: P_1 + … + P_N, when the model is at UE-side, where P_i is the model complexity for the i-th AI/ML model (FFS: if the model is at network-side)

o   Computation complexity to process N TRPs: C_1 + … + C_N, where C_i is the computation complexity for the i-th AI/ML model

·        Multi-TRP (i.e., one model for N TRPs)

o   Model complexity to support N TRPs: P, where P is the model complexity for the one model

o   Computation complexity to process N TRPs: C, where C is the computation complexity for the one model
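As an illustration only, the three reporting cases in the table can be sketched as follows; the symbols P (model complexity) and C (computation complexity per TRP) are notation introduced here, since the agreement names the quantities without fixing symbols.

```python
# Hypothetical sketch of aggregating reported AI/ML complexity over N TRPs.
# P_* = model complexity, C_* = per-TRP computation complexity (assumed names).

def single_trp_same_model(P, C, N):
    """One model reused for all N TRPs: one copy stored, N inference runs."""
    return P, N * C

def single_trp_n_models(P_list, C_list):
    """One model per TRP: storage and computation add up over the N models."""
    return sum(P_list), sum(C_list)

def multi_trp(P, C):
    """One joint model consumes all N TRPs in a single inference."""
    return P, C

# e.g., N = 4 TRPs, a shared model with P = 1.2 M params, C = 3.0 MFLOPs
same = single_trp_same_model(1.2, 3.0, 4)
```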

 

Agreement

For AI/ML based positioning, if an InF scenario different from InF-DH is evaluated for the model generalization capability, the selected parameters (e.g., clutter parameters) are compliant with TR 38.901 Table 7.2-4 (Evaluation parameters for InF).

·        Note: In TR 38.857 Table 6.1-1 (Parameters common to InF scenarios), InF-SH scenario uses the clutter parameter {20%, 2m, 10m} which is compliant with TR 38.901.

Agreement

For the model input used in evaluations of AI/ML based positioning, if time-domain channel impulse response (CIR) or power delay profile (PDP) is used as model input in the evaluation, companies report the input dimension NTRP * Nport * Nt, where NTRP is the number of TRPs, Nport is the number of transmit/receive antenna port pairs, Nt is the number of time domain samples.

·        Note: CIR and PDP may have different dimensions.

·        Note: Companies provide details on their assumption on how PDP is constructed and how (if applicable) it is mapped to Nt samples.
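As a minimal sketch of the input dimension NTRP * Nport * Nt and one possible PDP construction (per-tap power of the CIR); the dimensions and the construction are example assumptions, since the note above leaves the PDP construction to each company.

```python
# Hypothetical sketch: building a power delay profile (PDP) from a CIR and
# reporting the model input dimension N_TRP * N_port * N_t. Values are examples.

def pdp_from_cir(cir):
    """PDP as |h|^2 per time-domain tap; input taps are (real, imag) pairs."""
    return [re * re + im * im for re, im in cir]

N_TRP, N_PORT, N_T = 18, 2, 256           # example dimensions
cir = [(1.0, 0.0)] * N_T                  # one toy CIR of N_t complex taps
pdp = pdp_from_cir(cir)
input_dim = N_TRP * N_PORT * N_T          # the dimension companies report
```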

9.2.4.2       Other aspects on AI/ML for positioning accuracy enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2208400         Other Aspects of AI/ML Based Positioning Enhancement        Ericsson

R1-2208434         Discussion on AI/ML for positioning accuracy enhancement   Huawei, HiSilicon

R1-2208526         Discussion on other aspects for AI positioning enhancement    ZTE

R1-2208551         Discussion on other aspects on AIML for positioning accuracy enhancement               Spreadtrum Communications

R1-2208639         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2208855         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement               OPPO

R1-2208883         On Enhancement of AI/ML based Positioning             Google

R1-2208904         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2208972         Discussion on AI/ML for positioning enhancement    CATT

R1-2209016         Discussions on sub use cases and specification impacts for AIML positioning               Fujitsu

R1-2209097         Discussion on AI/ML for positioning accuracy enhancement   Sony

R1-2209125         AI/ML Positioning use cases and Associated Impacts Lenovo

R1-2209147         Other aspects on AI/ML for positioning        NEC

R1-2209235         Discussions on AI-ML for positioning accuracy enhancement CAICT

R1-2209282         Views on the other aspects of AI/ML-based positioning accuracy enhancement               xiaomi

R1-2209333         Discussion on other aspects on AI/ML for positioning accuracy enhancement               CMCC

R1-2209372         Other aspects on ML for positioning accuracy enhancement     Nokia, Nokia Shanghai Bell

R1-2209485         Designs and potential specification impacts of AIML for positioning     InterDigital, Inc.

R1-2209538         On potential specification impact of AI/ML for positioning      Fraunhofer IIS, Fraunhofer HHI

R1-2209581         Other aspects on AI/ML for positioning accuracy enhancement              Apple

R1-2209616         Discussion on AI/ML for positioning accuracy enhancement   Rakuten Symphony

R1-2209630         AI and ML for positioning enhancement       NVIDIA

R1-2209727         Representative sub use cases for Positioning Samsung

R1-2209900         Discussion on AI/ML for positioning accuracy enhancement   NTT DOCOMO, INC.

R1-2209981         Other aspects on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

 

[110bis-e-R18-AI/ML-07] – Huaming (vivo)

Email discussion on other aspects of AI/ML for positioning accuracy enhancement by October 19

-        Check points: October 14, October 19

R1-2210308         FL summary #1 of [110bis-e-R18-AI/ML-07]             Moderator (vivo)

R1-2210427        FL summary #2 of [110bis-e-R18-AI/ML-07]          Moderator (vivo)

From Oct 14th GTW session

Conclusion

·        Defer the discussion of prioritization of online/offline training for AI/ML based positioning until more progress on online vs. offline training discussion in agenda 9.2.1.

Agreement

·        Study and provide inputs on benefit(s) and potential specification impact at least for the following cases of AI/ML based positioning accuracy enhancement

o   Case 1: UE-based positioning with UE-side model, direct AI/ML or AI/ML assisted positioning

o   Case 2a: UE-assisted/LMF-based positioning with UE-side model, AI/ML assisted positioning

o   Case 2b: UE-assisted/LMF-based positioning with LMF-side model, direct AI/ML positioning

o   Case 3a: NG-RAN node assisted positioning with gNB-side model, AI/ML assisted positioning

o   Case 3b: NG-RAN node assisted positioning with LMF-side model, direct AI/ML positioning

Agreement

Regarding AI/ML model indication[/configuration], to study and provide inputs on potential specification impact at least for the following aspects on conditions/criteria of AI/ML model for AI/ML based positioning accuracy enhancement

·        Validity conditions, e.g., applicable area/[zone/]scenario/environment and time interval, etc.

·        Model capability, e.g., positioning accuracy quality and model inference latency

·        Conditions and requirements, e.g., required assistance signalling and/or reference signals configurations, dataset information

·        Note: other aspects are not precluded

Agreement

Regarding AI/ML model monitoring for AI/ML based positioning, to study and provide inputs on potential specification impact for the following aspects

 

 

R1-2210565        FL summary #3 of [110bis-e-R18-AI/ML-07]          Moderator (vivo)

Presented in Oct 18th GTW session

 

R1-2210669        FL summary #4 of [110bis-e-R18-AI/ML-07]          Moderator (vivo)

From Oct 19th GTW session

Agreement

Regarding data collection for AI/ML model training for AI/ML based positioning, at least for each of the agreed cases (Case 1 to Case 3b)


 RAN1#111

9.2       Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-221348 for detailed scope of the SI.

 

R1-2212845        Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface)            Ad-hoc Chair (CMCC)

Endorsed and contents incorporated below.

 

[111-R18-AI/ML] – Taesang (Qualcomm)

To be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc.

 

R1-2212106         Technical report for Rel-18 SI on AI and ML for NR air interface          Qualcomm Incorporated

9.2.1        General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2210840         Continued discussion on common AI/ML characteristics and operations               FUTUREWEI

R1-2210884         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2210997         Discussions on AI/ML framework  vivo

R1-2211056         Discussion on general aspects of common AI PHY framework ZTE

R1-2211072         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2211123         On General Aspects of AI/ML Framework   Google

R1-2211188         General aspects of AI/ML framework           CATT

R1-2211215         Discussion on general aspects of AI/ML framework   KDDI Corporation

R1-2211226         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2211287         Discussion on general aspects of AI/ML framework   Ericsson

R1-2211354         Views on the general aspects of AI/ML framework    xiaomi

R1-2211392         Discussion on general aspects of AI/ML framework   Intel Corporation

R1-2211477         On general aspects of AI/ML framework      OPPO

R1-2211508         Discussions on Common Aspects of AI/ML Framework           TCL Communication

R1-2211555         Discussion on general aspects of AI/ML framework for NR air interface               ETRI

R1-2211606         Considerations on common AI/ML framework           Sony

R1-2211671         Discussion on general aspects of AI/ML framework   CMCC

R1-2211714         General aspects of AI and ML framework for NR air interface NVIDIA

R1-2211729         Discussion on general aspects of AI/ML framework   InterDigital, Inc.

R1-2211772         General aspects of AI/ML framework           Lenovo

R1-2211804         Discussion on general aspect of AI/ML framework    Apple

R1-2211866         General aspects on AI/ML framework           LG Electronics

R1-2211910         Considerations on general aspects on AI-ML framework          CAICT

R1-2211933         Discussion on general aspects of AI/ML framework   Panasonic

R1-2211934         General aspects of AI/ML framework           AT&T

R1-2211976         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2212035         General aspects of AI ML framework and evaluation methodology        Samsung

R1-2212107         General aspects of AI/ML framework           Qualcomm Incorporated

R1-2212225         General aspects of AI/ML framework           MediaTek Inc.

R1-2212312         Discussion on AI/ML Model Life Cycle Management Rakuten Mobile, Inc

R1-2212326         Further discussion on the general aspects of ML for Air-interface          Nokia, Nokia Shanghai Bell

R1-2212355         Discussion on general aspects of AI ML framework   NEC

 

R1-2212654        Summary#1 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From Nov 14th session

Agreement

For UE-part/UE-side models, study the following mechanisms for LCM procedures:

 

R1-2212655        Summary#2 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From Nov 15th session

Working Assumption

Consider “proprietary model” and “open-format model” as two separate model format categories for RAN1 discussion,

 

Proprietary-format models

ML models of vendor-/device-specific proprietary format, from 3GPP perspective

NOTE: An example is a device-specific binary executable format

Open-format models

ML models of specified format that are mutually recognizable across vendors and allow interoperability, from 3GPP perspective

From RAN1 discussion viewpoint, RAN1 may assume that:

·        Proprietary-format models are not mutually recognizable across vendors and hide model design information from other vendors when shared.

·        Open-format models are mutually recognizable between vendors and do not hide model design information from other vendors when shared.

 

R1-2212656        Summary#3 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

Presented in Nov 16th session.

 

R1-2212657        Summary#4 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From Nov 17th session

Working Assumption

Terminology

Description

Model identification

A process/method of identifying an AI/ML model for the common understanding between the NW and the UE

Note: The process/method of model identification may or may not be applicable.

Note: Information regarding the AI/ML model may be shared during model identification.

 

Terminology

Description

Functionality identification

A process/method of identifying an AI/ML functionality for the common understanding between the NW and the UE

Note: Information regarding the AI/ML functionality may be shared during functionality identification.

FFS: granularity of functionality

Note: whether and how to indicate Functionality will be discussed separately.

 

 

R1-2212658        Final summary of General Aspects of AI/ML Framework  Moderator (Qualcomm)

From Nov 18th session

Working Assumption

Terminology

Description

Model update

Process of updating the model parameters and/or model structure of a model

Model parameter update

Process of updating the model parameters of a model

 

 

Final summary in R1-2213003.

9.2.2        AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2210841         Continued discussion on evaluation of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2210885         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2210954         Evaluation of AI-CSI         Ericsson

R1-2210998         Evaluation on AI/ML for CSI feedback enhancement vivo

R1-2211057         Evaluation on AI for CSI feedback enhancement        ZTE

R1-2211073         Evaluation on AI/ML for CSI feedback enhancement Fujitsu

R1-2211124         On Evaluation of AI/ML based CSI Google

R1-2211189         Evaluation methodology and results on AI/ML for CSI feedback enhancement               CATT

R1-2211227         Discussion on evaluation on AIML for CSI feedback enhancement        Spreadtrum Communications, BUPT

R1-2211258         Evaluation on AI/ML for CSI feedback enhancement Comba

R1-2211355         Discussion on evaluation on AI/ML for CSI feedback enhancement       xiaomi

R1-2211393         Evaluation for CSI feedback enhancements  Intel Corporation

R1-2211478         Evaluation methodology and preliminary results on AI/ML for CSI feedback enhancement       OPPO

R1-2211525         Evaluation on AI/ML for CSI feedback enhancement China Telecom

R1-2211556         Evaluation on AI/ML for CSI feedback enhancement ETRI

R1-2211589         Evaluation of AI/ML based methods for CSI feedback enhancement      Fraunhofer IIS, Fraunhofer HHI

R1-2211672         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2211716         Evaluation of AI and ML for CSI feedback enhancement         NVIDIA

R1-2211731         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2211773         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2211805         Evaluation for AI/ML based CSI feedback enhancement          Apple

R1-2211867         Evaluation on AI/ML for CSI feedback enhancement LG Electronics

R1-2211892         Model Quantization for CSI feedback           Sharp

R1-2211911         Some discussions on evaluation on AI-ML for CSI feedback   CAICT

R1-2211977         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2212036         Evaluation on AI ML for CSI feedback enhancement Samsung

R1-2212108         Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated

R1-2212226         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

R1-2212327         Evaluation of ML for CSI feedback enhancement       Nokia, Nokia Shanghai Bell

R1-2212452         Discussion on AI/ML for CSI feedback enhancement AT&T

 

R1-2212669         Summary#1 for CSI evaluation of [111-R18-AI/ML] Moderator (Huawei)

R1-2212670        Summary#2 for CSI evaluation of [111-R18-AI/ML]            Moderator (Huawei)

From Nov 16th session

Working Assumption

The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI compression without generalization/scalability verification

·        FFS the description and results for generalization/scalability may need a separate table

·        FFS the value or range of payload size X/Y/Z

·        FFS the description and results for different training types/cases may need a separate table

·        FFS: training related overhead

Table X. Evaluation results for CSI compression without model generalization/scalability, [traffic type], [Max rank value], [RU] [training type/case]

 

 

(Result columns per source, e.g., Source 1; empty cells to be filled by each company.)

·        CSI generation part: AI/ML model backbone; Pre-processing; Post-processing; FLOPs/M; Number of parameters/M; [Storage /Mbytes]

·        CSI reconstruction part: AI/ML model backbone; [Pre-processing]; [Post-processing]; FLOPs/M; Number of parameters/M; [Storage /Mbytes]

·        Common description: Input type; Output type; Quantization/dequantization method

·        Dataset description: Train/k; Test/k; Ground-truth CSI quantization method; [Other assumptions/settings agreed to be reported]

·        Benchmark

·        Intermediate KPI I#1 of benchmark, [layer 1] and [layer 2]: CSI feedback payload X / Y / Z

·        Gain for intermediate KPI I#1, [layer 1] and [layer 2]: CSI feedback payload X / Y / Z

·        Intermediate KPI I#2 of benchmark, [layer 1] and [layer 2]: CSI feedback payload X / Y / Z

·        Gain for intermediate KPI I#2, [layer 1] and [layer 2]: CSI feedback payload X / Y / Z

·        Gain for Mean UPT: CSI feedback payload X / Y / Z

·        Gain for 5% UPT: CSI feedback payload X / Y / Z

·        FFS others

 

 

 

 

 

Agreement

For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following evaluation cases for sequential training are considered for the multi-vendor scenario:

·        Case 1 (baseline): Type 3 training between one NW part model and one UE part model

o   Note 1: Case 1 can be naturally applied to the NW-first training case where 1 NW part model to M>1 separate UE part models

§  Companies to report the dataset used between the NW part model and the UE part model, e.g., whether dataset for training UE part model is the same or a subset of the dataset for training NW part model

o   Note 2: Case 1 can be naturally applied to the UE-first training case where 1 UE part model to N>1 separate NW part models

§  Companies to report the dataset used between the NW part model and the UE part model, e.g., whether dataset for training NW part model is the same or a subset of the dataset for training UE part model

o   Companies to report the AI/ML structures for the combination(s) of UE part model and NW part model, which can be the same or different

o   FFS: different quantization methods between NW side and UE side

·        Case 2: For UE-first training, Type 3 training between one NW part model and M>1 separate UE part models

o   Note: Case 2 can be also applied to the M>1 UE part models to N>1 NW part models

o   Companies to report the AI/ML structures for the M>1 UE part models and the NW part model

o   Companies to report the dataset used at UE part models, e.g., same or different dataset(s) among M UE part models

·        Case 3: For NW-first training, Type 3 training between one UE part model and N>1 separate NW part models

o   Note: Case 3 can be also applied to the N>1 NW part models to M>1 UE part models

o   Companies to report the AI/ML structures for the UE part model and the N>1 NW part models

o   Companies to report the dataset used at NW part models, e.g., same or different dataset(s) among N NW part models

·        FFS: whether/how to report overhead of dataset
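As a concrete, non-normative illustration of the NW-first sequential flow behind Case 1, the sketch below uses simple linear maps in place of real AI/ML models: the NW side first trains a CSI generation/reconstruction pair, then shares a dataset of (CSI sample, latent label) pairs, and the UE side fits its own CSI generation part to those labels. All names, dimensions, and the linear-model stand-ins are assumptions for illustration only, not the agreed procedure in detail.

```python
# Hypothetical sketch of Type 3 NW-first sequential separate training
# (Case 1: one NW-side model pair, one UE-side CSI generation part).
# Linear maps stand in for the actual AI/ML models.
import numpy as np

rng = np.random.default_rng(0)
D, Z, N = 16, 4, 2000              # CSI dim, feedback (latent) dim, samples
csi = rng.standard_normal((N, D))  # synthetic training dataset

# Step 1 (NW side): train CSI generation + reconstruction parts jointly.
# A PCA-style linear autoencoder via SVD plays the role of offline training.
_, _, vt = np.linalg.svd(csi, full_matrices=False)
enc_nw = vt[:Z].T                  # D x Z "CSI generation part" at NW
dec_nw = vt[:Z]                    # Z x D "CSI reconstruction part" at NW

# Step 2: NW shares a dataset of (input CSI, target latent) with the UE
# side; whether it equals or is a subset of the Step 1 dataset is one of
# the items companies are asked to report.
latent_labels = csi @ enc_nw

# Step 3 (UE side): UE trains its own CSI generation part (possibly a
# different structure) to reproduce the shared labels; least squares here.
enc_ue, *_ = np.linalg.lstsq(csi, latent_labels, rcond=None)

# The UE encoder paired with the NW decoder should reconstruct CSI about
# as well as the NW's own encoder/decoder pair on this dataset.
err_pair = float(np.mean((csi @ enc_ue @ dec_nw - csi) ** 2))
err_nw = float(np.mean((csi @ enc_nw @ dec_nw - csi) ** 2))
```

Because the shared labels here are produced exactly by the NW encoder, the UE-side fit recovers an equivalent encoder; with real, mismatched model structures the gap between the two errors is precisely what the evaluation cases above are meant to quantify.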

 

R1-2212671         Summary#3 for CSI evaluation of [111-R18-AI/ML] Moderator (Huawei)

R1-2212672        Summary#4 for CSI evaluation of [111-R18-AI/ML]            Moderator (Huawei)

From Nov 17th session

Working Assumption

For the AI/ML based CSI prediction sub use case, both the nearest historical CSI without prediction and a non-AI/ML (or collaboration level x AI/ML) based CSI prediction approach are taken as baselines for the benchmark of performance comparison, and the specific non-AI/ML/collaboration level x AI/ML based CSI prediction approach is reported by companies.

 

Agreement

For evaluating the generalization/scalability over various configurations for CSI compression, to achieve the scalability over different input dimensions of the CSI generation part (e.g., different bandwidths/frequency granularities, or different antenna ports), the generalization cases are elaborated as follows

·         Case 1: The AI/ML model is trained based on training dataset from a fixed dimension X1 (e.g., a fixed bandwidth/frequency granularity, and/or number of antenna ports), and then the AI/ML model performs inference/test on a dataset from the same dimension X1.

·         Case 2: The AI/ML model is trained based on training dataset from a single dimension X1, and then the AI/ML model performs inference/test on a dataset from a different dimension X2.

·         Case 3: The AI/ML model is trained based on training dataset by mixing datasets subject to multiple dimensions of X1, X2,..., Xn, and then the AI/ML model performs inference/test on a single dataset subject to the dimension of X1, or X2,…, or Xn.

·         Note: For Case 2/3, the solutions to achieve the scalability between Xi and Xj, are reported by companies, including, e.g., pre-processing to angle-delay domain, padding, additional adaptation layer in AI/ML model, etc.

·         FFS the verification of fine-tuning

·         FFS other additional cases
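Among the Case 2/3 solutions listed in the note above, padding is perhaps the simplest to picture. A minimal sketch, assuming a fixed maximum model input dimension (the constant, function name, and sizes are all illustrative, not from the agreement):

```python
# Illustrative zero-padding pre-processing so that one CSI generation
# part with a fixed input size serves several antenna-port dimensions
# (X1, X2, ...). MAX_PORTS is an assumed model design choice.
import numpy as np

MAX_PORTS = 32

def pad_ports(csi_vec: np.ndarray, max_ports: int = MAX_PORTS) -> np.ndarray:
    """Zero-pad the last (port) dimension of a CSI input up to the
    model's fixed maximum, leaving leading dimensions (e.g., layers) intact."""
    ports = csi_vec.shape[-1]
    if ports > max_ports:
        raise ValueError("input dimension exceeds the model maximum")
    pad = [(0, 0)] * (csi_vec.ndim - 1) + [(0, max_ports - ports)]
    return np.pad(csi_vec, pad)  # constant zero padding

x16 = np.ones((2, 16))   # e.g., a rank-2 input for a 16-port configuration
x = pad_ports(x16)       # shape (2, 32); the added ports are zeros
```

The same idea applies per bandwidth/frequency granularity; truncation or an adaptation layer, also named in the note, would replace the `np.pad` step.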

Agreement

For evaluating the generalization/scalability over various configurations for CSI compression, to achieve the scalability over different output dimensions of the CSI generation part (e.g., different generated CSI feedback dimensions), the generalization cases are elaborated as follows

·         Case 1: The AI/ML model is trained based on training dataset from a fixed output dimension Y1 (e.g., a fixed CSI feedback dimension), and then the AI/ML model performs inference/test on a dataset from the same output dimension Y1.

·         Case 2: The AI/ML model is trained based on training dataset from a single output dimension Y1, and then the AI/ML model performs inference/test on a dataset from a different output dimension Y2.

·         Case 3: The AI/ML model is trained based on training dataset by mixing datasets subject to multiple dimensions of Y1, Y2,..., Yn, and then the AI/ML model performs inference/test on a single dataset of Y1, or Y2,…, or Yn.

·         Note: For Case 1/2/3, companies to report whether the output of the CSI generation part is before quantization or after quantization.

·         Note: For Case 2/3, the solutions to achieve the scalability between Yi and Yj, are reported by companies, including, e.g., truncation, additional adaptation layer in AI/ML model, etc.

·         FFS the verification of fine-tuning

·         FFS other additional cases

 

R1-2212673        Summary#5 for CSI evaluation of [111-R18-AI/ML]            Moderator (Huawei)

From Nov 17th session

Agreement

For the evaluation of the high resolution quantization of the ground-truth CSI in the CSI compression, Float32 is adopted as the baseline/upper-bound of performance comparison.

 

Agreement

For the evaluation of quantization aware/non-aware training, the following cases are considered and reported by companies:

 

Agreement

For the evaluation of an example of Type 3 (Separate training at NW side and UE side) with sequential training, companies to report the set of information (e.g., dataset) shared in Step 2

 

Working Assumption

For the AI/ML based CSI prediction sub use case, the following initial template is considered for companies to report the evaluation results of AI/ML-based CSI prediction for the case without generalization/scalability verification

·        FFS the description and results for generalization/scalability may need a separate table

·        FFS whether/how to capture the multiple predicted CSI instances and their mapping to slots

Table X. Evaluation results for CSI prediction without model generalization/scalability, [traffic type], [Max rank value], [RU]

| | | Source 1 |
| AI/ML model description | AI/ML model backbone | |
| | [Pre-processing] | |
| | [Post-processing] | |
| | FLOPs/M | |
| | Parameters/M | |
| | [Storage /Mbytes] | |
| | Input type | |
| | Output type | |
| Assumption | UE speed | |
| | CSI feedback periodicity | |
| | Observation window (number/distance) | |
| | Prediction window (number/distance) | |
| | Whether/how to adopt spatial consistency | |
| Dataset size | Train/k | |
| | Test/k | |
| Benchmark 1 | | |
| Intermediate KPI #1 of Benchmark 1 | | |
| Gain for intermediate KPI#1 over Benchmark 1 | | |
| Intermediate KPI #2 of Benchmark 1 | | |
| Gain for intermediate KPI#2 over Benchmark 1 | | |
| Gain for eventual KPI (Benchmark 1) | Mean UPT | |
| | 5% UPT | |
| Benchmark 2 | | |
| Intermediate KPI #1 of Benchmark 2 | | |
| Gain for intermediate KPI#1 over Benchmark 2 | | |
| Intermediate KPI #2 of Benchmark 2 | | |
| Gain for intermediate KPI#2 over Benchmark 2 | | |
| Gain for eventual KPI (Benchmark 2) | Mean UPT | |
| | 5% UPT | |
| FFS others | | |

 

 

 

 

Agreement

For evaluating the generalization/scalability over various configurations for CSI compression, to achieve the scalability over different input/output dimensions, companies to report which case(s) in the following are evaluated

·         Case 0 (benchmark for comparison): One CSI generation part with fixed input and output dimensions to 1 CSI reconstruction part with fixed input and output dimensions for each of the different input and/or output dimensions.

·         Case 1: One CSI generation part with scalable input and/or output dimensions to N>1 separate CSI reconstruction parts each with fixed and different output and/or input dimensions

·         Case 2: M>1 separate CSI generation parts each with fixed and different input and/or output dimensions to one CSI reconstruction part with scalable output and/or input dimensions

·         Case 3: A pair of CSI generation part with scalable input/output dimensions and CSI reconstruction part with scalable output and/or input dimensions

Agreement

For the evaluation of the high resolution quantization of the ground-truth CSI in the CSI compression, if an R16 Type II-like method is considered, companies to report the R16 Type II parameters with specified or new/larger values to achieve higher resolution of the ground-truth CSI labels, e.g., L, p_v, β, reference amplitude, differential amplitude, phase, etc.

 

 

Final summary in R1-2212966.

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2210842         Continued discussion on other aspects of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2210886         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2210955         Discussion on AI-CSI        Ericsson

R1-2210999         Other aspects on AI/ML for CSI feedback enhancement           vivo

R1-2211058         Discussion on other aspects for AI CSI feedback enhancement ZTE

R1-2211074         Views on specification impact for CSI compression with two-sided model               Fujitsu

R1-2211125         On Enhancement of AI/ML based CSI           Google

R1-2211133         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2211190         Other aspects on AI/ML for CSI feedback enhancement           CATT

R1-2211228         Discussion on other aspects on AIML for CSI feedback            Spreadtrum Communications

R1-2211356         Views on potential specification impact for CSI feedback based on AI/ML               xiaomi

R1-2211394         Use-cases and specification for CSI feedback              Intel Corporation

R1-2211479         On sub use cases and other aspects of AI/ML for CSI feedback enhancement               OPPO

R1-2211509         Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement            TCL Communication

R1-2211526         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2212542         Discussion on other aspects on AI/ML for CSI feedback enhancement  ETRI      (rev of R1-2211557)

R1-2211607         Considerations on CSI measurement enhancements via AI/ML Sony

R1-2211673         Discussion on other aspects on AI/ML for CSI feedback enhancement  CMCC

R1-2211718         AI and ML for CSI feedback enhancement   NVIDIA

R1-2211733         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2211750         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2211774         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2211806         Discussion on other aspects of AI/ML for CSI enhancement    Apple

R1-2211868         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2211912         Discussions on AI-ML for CSI feedback       CAICT

R1-2211978         Discussion on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.

R1-2212037         Representative sub use cases for CSI feedback enhancement    Samsung

R1-2212109         Other aspects on AI/ML for CSI feedback enhancement           Qualcomm Incorporated

R1-2212227         Other aspects on AI/ML for CSI feedback enhancement           MediaTek Inc.

R1-2212328         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2212453         Discussion on AI/ML for CSI feedback enhancement AT&T

 

R1-2212641        Summary #1 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

 

R1-2212642        Summary #2 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Nov 16th session

Agreement

Time domain CSI prediction using a UE-sided model is selected as a representative sub-use case for CSI enhancement.

Note: Continue evaluation discussion in 9.2.2.1.

Note: RAN1 defers the potential specification impact discussion in 9.2.2.2 until RAN1#112b-e, and RAN1 will revisit at RAN1#112b-e whether to defer further until the end of the R18 AI/ML SI.

Note: LCM-related potential specification impact follows the high-level principle of other one-sided model sub-cases.

 

 

R1-2212643         Summary #3 on other aspects of AI/ML for CSI enhancement Moderator (Apple)

R1-2212644         Summary #4 on other aspects of AI/ML for CSI enhancement Moderator (Apple)

R1-2212909        Summary #5 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Nov 18th session

Conclusion

In CSI compression using two-sided model use case, training collaboration type 2 over the air interface for model training (not including model update) is deprioritized in R18 SI.

 

Note:

·         To align terminology, output CSI assumed at UE in previous agreement will be referred as output-CSI-UE.

·         To align terminology, input-CSI-NW is the input CSI assumed at NW.

9.2.3        AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2210843         Continued discussion on evaluation of AI/ML for beam management               FUTUREWEI

R1-2210887         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2211000         Evaluation on AI/ML for beam management vivo

R1-2211059         Evaluation on AI for beam management       ZTE

R1-2211075         Evaluation on AI/ML for beam management Fujitsu

R1-2211126         On Evaluation of AI/ML based Beam Management    Google

R1-2211191         Evaluation methodology and results on AI/ML for beam management  CATT

R1-2211229         Evaluation on AI for beam management       Spreadtrum Communications

R1-2211288         Evaluation of AIML for beam management  Ericsson

R1-2211315         Discussion for evaluation on AI/ML for beam management     InterDigital, Inc.

R1-2211357         Evaluation on AI/ML for beam management xiaomi

R1-2211395         Evaluations for AI/ML beam management   Intel Corporation

R1-2211480         Evaluation methodology and preliminary results on AI/ML for beam management               OPPO

R1-2211527         Evaluation on AI/ML for beam management China Telecom

R1-2211674         Discussion on evaluation on AI/ML for beam management      CMCC

R1-2211719         Evaluation of AI and ML for beam management         NVIDIA

R1-2211775         Evaluation on AI/ML for beam management Lenovo

R1-2211807         Evaluation on AI/ML for beam management Apple

R1-2211869         Evaluation on AI/ML for beam management LG Electronics

R1-2211913         Some discussions on evaluation on AI-ML for Beam management         CAICT

R1-2211979         Discussion on evaluation on AI/ML for beam management      NTT DOCOMO, INC.

R1-2212038         Evaluation on AI ML for Beam management              Samsung

R1-2212110         Evaluation on AI/ML for beam management Qualcomm Incorporated

R1-2212228         Evaluation on AI/ML for beam management MediaTek Inc.

R1-2212329         Evaluation of ML for beam management      Nokia, Nokia Shanghai Bell

R1-2212423         Evaluation on AI/ML for beam management CEWiT

 

R1-2212591         Feature lead summary #0 evaluation of AI/ML for beam management   Moderator (Samsung)

R1-2212592        Feature lead summary #1 evaluation of AI/ML for beam management               Moderator (Samsung)

From Nov 15th session

Agreement

The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations as a starting point:

 

Agreement

 

 

R1-2212593        Feature lead summary #2 evaluation of AI/ML for beam management               Moderator (Samsung)

From Nov 16th session

Agreement

 

Agreement

For BM-Case1 and BM-Case2, to verify the generalization performance of an AI/ML model over various scenarios/configurations, additionally consider

·        Various Set B of beam(pairs)

 

Agreement

At least for evaluation on the performance of DL Tx beam prediction, consider the following options for Rx beam for providing input for AI/ML model for training and/or inference if applicable

 

 

R1-2212594        Feature lead summary #3 evaluation of AI/ML for beam management               Moderator (Samsung)

From Nov 17th session

Agreement

·        For generalization performance verification, consider the following

o   Scenarios

§  Various deployment scenarios,

·        e.g., UMa, UMi and others,

·        e.g., 200m ISD or 500m ISD and others

·        e.g., same deployment, different cells with different configuration/assumption

·        e.g., gNB height and UE height

·        FFS: e.g., Carrier frequencies

§  Various outdoor/indoor UE distributions, e.g., 100%/0%, 20%/80%, and others

§  Various UE mobility,

·        e.g., 3km/h, 30km/h, 60km/h and others

o   Configurations (parameters and settings)

§  Various UE parameters, e.g., number of UE Rx beams (including number of panels and UE antenna array dimensions)

§  Various gNB settings, e.g., DL Tx beam codebook (including various Set A of beam(pairs) and gNB antenna array dimensions)

§  Various Set B of beam (pairs)

§  T1 for measurement /T2 for prediction for BM-Case2

o   Other scenarios/configurations(parameters and settings) are not precluded and can be reported by companies.

 

 

R1-2212904        Feature lead summary #4 evaluation of AI/ML for beam management               Moderator (Samsung)

From Nov 18th session

Agreement

·        For the evaluation of the overhead for BM-Case2, adopt the following metrics:

o   RS overhead reduction,

§        Option 2: RS overhead reduction = 1 − N/M

·        where N is the total number of beams (pairs) (with reference signal (SSB and/or CSI-RS)) required for measurement for AI/ML, including the beams (pairs) required for additional measurements before/after the prediction if applicable

·        where M is the total number of beams (pairs) (with reference signal (SSB and/or CSI-RS)) required for measurement for baseline scheme

·        Companies report the assumption on additional measurements

§        FFS: Option 3: RS overhead reduction = 1 − N/(M × L)

·        where N is the number of beams (pairs) (with reference signal (SSB and/or CSI-RS)) required for measurement for AI/ML in each time instance

·        where M is the total number of beams (pairs) to be predicted for each time instance

·        where L is ratio of periodicity of time instance for measurements to periodicity of time instance for prediction

§  Companies report the assumption on T1 and T2 patterns

§  Other options are not precluded and can be reported by companies.
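The two metrics can be sketched directly from the variable definitions above. Note that the formulas encoded here (1 − N/M for Option 2, 1 − N/(M·L) for Option 3) follow from those definitions but the Option 3 form is an assumed reading, since Option 3 is still FFS; the function names are illustrative.

```python
def rs_oh_reduction_opt2(n_aiml: int, m_baseline: int) -> float:
    """Option 2: 1 - N/M, where N is the total number of beams (pairs)
    measured for AI/ML (including any additional measurements) and M is
    the total measured for the baseline scheme."""
    return 1.0 - n_aiml / m_baseline

def rs_oh_reduction_opt3(n_per_instance: int, m_predicted: int, l_ratio: float) -> float:
    """Assumed Option 3: 1 - N/(M*L), with per-time-instance counts N
    (measured) and M (predicted), and L the ratio of measurement
    periodicity to prediction periodicity."""
    return 1.0 - n_per_instance / (m_predicted * l_ratio)
```

For example, measuring a Set B of 8 beams where the baseline sweeps 32 gives an Option 2 reduction of 0.75; measuring those 8 beams only every fourth prediction instance gives an (assumed) Option 3 reduction of 0.9375.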

 

Final summary in R1-2212905.

9.2.3.2       Other aspects on AI/ML for beam management

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2210844         Continued discussion on other aspects of AI/ML for beam management               FUTUREWEI

R1-2210888         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2211001         Other aspects on AI/ML for beam management          vivo

R1-2211038         Discussion on other aspects of AI/ML beam management        New H3C Technologies Co., Ltd.

R1-2211060         Discussion on other aspects for AI beam management              ZTE

R1-2211076         Sub use cases and specification impact on AI/ML for beam management               Fujitsu

R1-2211127         On Enhancement of AI/ML based Beam Management              Google

R1-2211192         Other aspects on AI/ML for beam management          CATT

R1-2211230         Discussion on other aspects on AIML for beam management   Spreadtrum Communications

R1-2211289         Discussion on AI/ML for beam management Ericsson

R1-2211316         Discussion for other aspects on AI/ML for beam management InterDigital, Inc.

R1-2211358         Potential specification impact on AI/ML for beam management             xiaomi

R1-2211396         Use-cases and Specification Impact for AI/ML beam management         Intel Corporation

R1-2211481         Other aspects of AI/ML for beam management           OPPO

R1-2211510         Discussions on Sub-Use Cases in AI/ML for Beam Management           TCL Communication

R1-2211528         Other aspects on AI/ML for beam management          China Telecom

R1-2211558         Discussion on other aspects on AI/ML for beam management  ETRI

R1-2211590         Discussion on sub use cases of AI/ML beam management        Panasonic

R1-2211608         Consideration on AI/ML for beam management          Sony

R1-2211675         Discussion on other aspects on AI/ML for beam management  CMCC

R1-2211721         AI and ML for beam management  NVIDIA

R1-2211776         Further aspects of AI/ML for beam management        Lenovo

R1-2211808         Discussion on other aspects of AI/ML for beam management  Apple

R1-2211870         Other aspects on AI/ML for beam management          LG Electronics

R1-2211914         Discussions on AI-ML for Beam management            CAICT

R1-2211980         Discussion on AI/ML for beam management NTT DOCOMO, INC.

R1-2212039         Representative sub use cases for beam management   Samsung

R1-2212111         Other aspects on AI/ML for beam management          Qualcomm Incorporated

R1-2212150         Discussion on other aspects on AI/ML for beam management  KT Corp.

R1-2212229         Other aspects on AI/ML for beam management          MediaTek Inc.

R1-2212320         Other aspects on AI/ML for beam management          Rakuten Symphony

R1-2212330         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2212372         Discussion on AI/ML for beam management NEC

 

R1-2212718        Summary#1 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Nov 15th session

Agreement

For the sub use case BM-Case1 and BM-Case2, at least support Alt.1 and Alt.2 for AI/ML model training and inference for further study:

·        Alt.1. AI/ML model training and inference at NW side

·        Alt.2. AI/ML model training and inference at UE side

·        The discussion on Alt.3 for BM-Case1 and BM-Case2 is dependent on the conclusion/agreement of Agenda item 9.2.1 of RAN1 and/or RAN2 on whether to support model transfer for UE-side AI/ML model or not

o   Alt.3. AI/ML model training at NW side, AI/ML model inference at UE side

 

R1-2212719        Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Nov 16th session

Agreement

For BM-Case1 and BM-Case2 with a network-side AI/ML model, study potential specification impact on the following L1 reporting enhancement for AI/ML model inference

·        UE to report the measurement results of more than 4 beams in one reporting instance

·        Other L1 reporting enhancements can be considered

Agreement

Regarding the data collection for AI/ML model training at UE side, study the potential specification impact considering the following additional aspects.

·        Whether and how to initiate data collection

·        Configurations, e.g., configuration related to set A and/or Set B, information on association/mapping of Set A and Set B

·        Assistance information from Network to UE (If supported)

·        Other aspect(s) is not precluded

 

R1-2212720        Summary#3 for other aspects on AI/ML for beam management       Moderator (OPPO)

Presented in Nov 17th session.

 

R1-2212927        Summary#4 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Nov 18th session

Agreement

Regarding NW-side model monitoring for a network-side AI/ML model of BM-Case1 and BM-Case2, study the necessity and the potential specification impacts from the following aspects:

·        UE reporting of beam measurement(s) based on a set of beams indicated by gNB.

·        Signaling, e.g., RRC-based, L1-based.

·        Note: Performance, UE complexity, and power consumption should be considered.

9.2.4        AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2210854         Evaluation of AI/ML for Positioning Accuracy Enhancement  Ericsson

R1-2210889         Evaluation on AI/ML for positioning accuracy enhancement    Huawei, HiSilicon

R1-2211002         Evaluation on AI/ML for positioning accuracy enhancement    vivo

R1-2211061         Evaluation on AI for positioning enhancement            ZTE

R1-2211077         Further evaluation results and discussions of AI positioning accuracy enhancement               Fujitsu

R1-2211128         On Evaluation of AI/ML based Positioning  Google

R1-2211193         Evaluation methodology and results on AI/ML for positioning enhancement               CATT

R1-2211359         Evaluation on AI/ML for positioning accuracy enhancement    xiaomi

R1-2211482         Evaluation methodology and preliminary results on AI/ML for positioning accuracy enhancement       OPPO

R1-2211529         Evaluation on AI/ML for positioning accuracy enhancement    China Telecom

R1-2211676         Discussion on evaluation on AI/ML for positioning accuracy enhancement               CMCC

R1-2211715         Evaluation on AI/ML for positioning accuracy enhancement    InterDigital, Inc.

R1-2211722         Evaluation of AI and ML for positioning enhancement             NVIDIA

R1-2211777         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2211809         On Evaluation on AI/ML for positioning accuracy enhancement            Apple

R1-2211871         Evaluation on AI/ML for positioning accuracy enhancement    LG Electronics

R1-2211915         Some discussions on evaluation on AI-ML for positioning accuracy enhancement               CAICT

R1-2212040         Evaluation on AI ML for Positioning            Samsung

R1-2212112         Evaluation on AI/ML for positioning accuracy enhancement    Qualcomm Incorporated

R1-2212230         Evaluation on AI/ML for positioning accuracy enhancement    MediaTek Inc.

R1-2212331         Evaluation of ML for positioning accuracy enhancement          Nokia, Nokia Shanghai Bell

R1-2212382         Evaluation on AI/ML for positioning accuracy enhancement    Fraunhofer IIS, Fraunhofer HHI

 

R1-2212610        Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Nov 15th session

Agreement

Study how AI/ML positioning accuracy is affected by: user density/size of the training dataset.

Note: details of user density/size of training dataset to be reported in the evaluation.

 

Agreement

For reporting the model input dimension N_TRP × N_port × N_t of CIR and PDP, N_t refers to the first N_t consecutive time domain samples.

 

 

R1-2212611        Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Nov 16th session

Agreement

For reporting the model input dimension N_TRP × N_port × N_t:

 

Agreement

At least for model inference of AI/ML assisted positioning, evaluate and report the AI/ML model output, including (a) the type of information (e.g., ToA, RSTD, AoD, AoA, LOS/NLOS indicator) to use as model output, (b) soft information vs hard information, (c) whether the model output can reuse existing measurement report (e.g., NRPPa, LPP).

 

Agreement

For AI/ML assisted positioning, evaluate the three constructions:

Note: Individual company may evaluate one or more of the three constructions.

 

Agreement

For AI/ML assisted approach, study the performance of model monitoring metrics at least where the metrics are obtained from inference accuracy of model output.

 

Agreement

For both direct and AI/ML assisted positioning methods, investigate at least the impact of the amount of fine-tuning data on the positioning accuracy of the fine-tuned model.

 

Agreement

For the RAN1#110bis agreement on the calculation of model complexity, the FFS are resolved with the following update:

 

Model complexity to support N TRPs

·        Single-TRP, same model for N TRPs: model complexity = M_single, where M_single is the model complexity for one TRP and the same model is used for all N TRPs.

·        Single-TRP, N models for N TRPs: model complexity = M_1 + M_2 + … + M_N, where M_i is the model complexity for the i-th AI/ML model.

Note: The reported model complexity above is intended for inference and may not be directly applicable to the complexity of other LCM aspects.
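In code form, the two bookkeeping rules for single-TRP constructions amount to the following; the symbol names (m_single, m_per_trp) are illustrative stand-ins, not TR notation.

```python
def complexity_same_model(m_single: float) -> float:
    """Single-TRP, same model reused for N TRPs: the reported model
    complexity is that of the one shared model, independent of N."""
    return m_single

def complexity_n_models(m_per_trp: list) -> float:
    """Single-TRP, N models for N TRPs: the reported model complexity
    is the sum of the N per-TRP model complexities."""
    return float(sum(m_per_trp))
```

So, for instance, 18 TRPs served by one shared model report the single model's complexity, while 3 distinct per-TRP models of complexities 1.0, 2.0, and 0.5 report 3.5.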

 

Observation

Direct AI/ML positioning can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods when the generalization aspects are not considered.

·        For InF-DH with clutter parameter setting {60%, 6m, 2m}, evaluation results submitted to RAN1#111 indicate that the direct AI/ML positioning can achieve horizontal positioning accuracy of <1m at CDF=90%, as compared to >15m for conventional positioning method.

 

R1-2212612        Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Nov 17th session

Agreement

For AI/ML based positioning, companies optionally evaluate the impact of at least the following issues related to measurements on the positioning accuracy of the AI/ML model. The simulation assumptions reflecting these issues are up to companies.

 

 

R1-2212816        Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Nov 18th session

Conclusion

Companies describe how their computational complexity values are obtained.

·        It is out of 3GPP scope to consider computational complexity values that have platform-dependency and/or use implementation (hardware and software) optimization solutions.

Observation

AI/ML assisted positioning can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods when the generalization aspects are not considered.

Note: how to capture the observation(s) into TR is separate discussion.

 

Agreement

·        For the AI/ML assisted approach, for a given AI/ML model design (e.g., input, output, single-TRP vs. multi-TRP), identify the generalization aspects where model fine-tuning/mixed training dataset/model switching is necessary.

 

Final summary in R1-2212817.

9.2.4.2       Other aspects on AI/ML for positioning accuracy enhancement

Including finalization of representative sub use cases (by RAN1#111) and discussions on potential specification impact.

 

R1-2210855         Other Aspects of AI/ML Based Positioning Enhancement        Ericsson

R1-2210890         Discussion on AI/ML for positioning accuracy enhancement   Huawei, HiSilicon

R1-2211003         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2211062         Discussion on other aspects for AI positioning enhancement    ZTE

R1-2211078         Discussions on spec impacts of model training, data collection, model identification and model monitoring for AIML for positioning accuracy enhancement Fujitsu

R1-2211129         On Enhancement of AI/ML based Positioning             Google

R1-2211194         Other aspects  on AI/ML for positioning enhancement              CATT

R1-2211231         Discussion on other aspects on AIML for positioning accuracy enhancement               Spreadtrum Communications

R1-2211360         Views on the other aspects of AI/ML-based positioning accuracy enhancement               xiaomi

R1-2211483         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement               OPPO

R1-2211609         On AI/ML for positioning accuracy enhancement       Sony

R1-2211677         Discussion on other aspects on AI/ML for positioning accuracy enhancement               CMCC

R1-2211717         Designs and potential specification impacts of AIML for positioning     InterDigital, Inc.

R1-2211725         AI and ML for positioning enhancement       NVIDIA

R1-2211778         AI/ML Positioning use cases and Associated Impacts Lenovo

R1-2211810         On Other aspects on AI/ML for positioning accuracy enhancement        Apple

R1-2211872         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2211916         Discussions on AI-ML for positioning accuracy enhancement CAICT

R1-2211981         Discussion on AI/ML for positioning accuracy enhancement   NTT DOCOMO, INC.

R1-2212041         Representative sub use cases for Positioning Samsung

R1-2212113         Other aspects on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2212214         Other aspects on AI-ML for positioning accuracy enhancement              Baicells

R1-2212231         Other aspects on AI/ML for positioning accuracy enhancement              MediaTek Inc.

R1-2212332         Other aspects on ML for positioning accuracy enhancement     Nokia, Nokia Shanghai Bell

R1-2212358         Discussion on AI/ML for positioning accuracy enhancement   NEC

R1-2212383         On potential AI/ML solutions for positioning              Fraunhofer IIS, Fraunhofer HHI

 

R1-2212549        FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Nov 15th session

Agreement

For the study of benefit(s) and potential specification impact for AI/ML based positioning accuracy enhancement, one-sided model whose inference is performed entirely at the UE or at the network is prioritized in Rel-18 SI.

 

Agreement

Regarding AI/ML model inference, to study and provide inputs on potential specification impact (including necessity and applicability of specifying AI/ML model input and/or output) at least for the following aspects for each of the agreed cases (Case 1 to Case 3b) in AI/ML based positioning accuracy enhancement

 

 

R1-2212742        FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Nov 16th session

Agreement

Regarding data collection for AI/ML model training for AI/ML based positioning,

 

 

R1-2212783        FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Nov 17th session

Agreement

Regarding data collection for AI/ML model training for AI/ML based positioning, study benefits, feasibility and potential specification impact (including necessity) for the following aspects

 

 

R1-2212877        FL summary #4 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Nov 18th session

Agreement

Regarding AI/ML model monitoring for AI/ML based positioning, to study and provide inputs on feasibility, potential benefits (if any) and potential specification impact at least for the following aspects

 

Agreement

For AI/ML based positioning accuracy enhancement, direct AI/ML positioning and AI/ML assisted positioning are selected as representative sub-use cases.


 RAN1#112

9.2       Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-221348 for detailed scope of the SI.

 

R1-2302063        Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface)            Ad-hoc Chair (CMCC)

 

[112-R18-AI/ML] – Taesang (Qualcomm)

To be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc

 

R1-2301402         Technical report for Rel-18 SI on AI and ML for NR air interface          Qualcomm Incorporated

9.2.1        General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2300043         Discussion on common AI/ML characteristics and operations  FUTUREWEI

R1-2300107         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2300170         Discussion on general aspects of common AI PHY framework ZTE

R1-2300178         Discussion on general aspects of AIML framework    Ericsson

R1-2300210         Discussion on general aspects of AI/ML framework   Spreadtrum Communications

R1-2300279         On general aspects of AI/ML framework      OPPO

R1-2300396         On General Aspects of AI/ML Framework   Google

R1-2300443         Discussions on AI/ML framework  vivo

R1-2300529         General aspects on AI/ML framework           LG Electronics

R1-2300566         Views on the general aspects of AI/ML framework    xiaomi

R1-2300603         Further discussion on the general aspects of ML for Air-interface          Nokia, Nokia Shanghai Bell

R1-2300670         Discussion on general aspects of AI/ML framework   CATT

R1-2300743         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2300823         Discussion on general aspects of AI ML framework   NEC

R1-2300840         Considerations on general aspects on AI-ML framework          CAICT

R1-2300868         Considerations on common AI/ML framework           Sony

R1-2300906         Discussion on general aspects of AI/ML framework   KDDI Corporation

R1-2300940         Discussion on general aspects of AI/ML framework   Intel Corporation

R1-2300989         Discussion on general aspects of AI/ML framework   CMCC

R1-2301040         Discussion on general aspects of AI/ML framework for NR air interface               ETRI

R1-2301139         General aspects of AI/ML framework           Fraunhofer IIS, Fraunhofer HHI

R1-2301147         Discussion on general aspects of AI/ML framework   Panasonic

R1-2301155         Discussion on general aspects of AI/ML framework   InterDigital, Inc.

R1-2301160         Discussion on AI/ML Framework   Rakuten Mobile, Inc

R1-2301177         General aspects of AI and ML framework for NR air interface NVIDIA

R1-2301198         General aspects of AI/ML framework           Lenovo

R1-2301220         General aspects of AI/ML Framework          AT&T

R1-2301254         General aspects of AI ML framework and evaluation methodology        Samsung

R1-2301336         Discussion on general aspect of AI/ML framework    Apple

R1-2301403         General aspects of AI/ML framework           Qualcomm Incorporated

R1-2301484         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2301586         Discussion on general aspects of AI/ML LCM             MediaTek Inc.

R1-2301663         Discussions on Common Aspects of AI/ML Framework           TCL Communication Ltd.

R1-2301664         Identifying Procedures for General Aspects of AI/ML Frameworks        Indian Institute of Tech (M), CEWiT, IIT Kanpur

 

R1-2301863        Summary#1 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From Monday session

Agreement

To facilitate the discussion, consider at least the following Cases for model delivery/transfer to UE, training location, and model delivery/transfer format combinations for UE-side models and UE-part of two-sided models.

 

Case | Model delivery/transfer                                           | Model storage location | Training location
y    | model delivery (if needed) over-the-top                           | Outside 3GPP Network   | UE-side / NW-side / neutral site
z1   | model transfer in proprietary format                              | 3GPP Network           | UE-side / neutral site
z2   | model transfer in proprietary format                              | 3GPP Network           | NW-side
z3   | model transfer in open format                                     | 3GPP Network           | UE-side / neutral site
z4   | model transfer in open format of a known model structure at UE    | 3GPP Network           | NW-side
z5   | model transfer in open format of an unknown model structure at UE | 3GPP Network           | NW-side

 

Note: The Case definition is only for the purpose of facilitating discussion and does not imply applicability, feasibility, entity mapping, architecture, signalling nor any prioritization.

Note: The Case definition is NOT intended to introduce sub-levels of Level z.

Note: Other cases may be included further upon interest from companies.

FFS: boundary between z4 and z5

 

 

R1-2301864        Summary#2 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

Presented in Tuesday session

 

R1-2301865        Summary#3 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From Wednesday session

Agreement

For UE-side models and UE-part of two-sided models:

FFS: Relationship between functionality identification and model identification

FFS: Performance monitoring and RAN4 impact

FFS: detailed understanding on model

 

Agreement

·        AI/ML-enabled Feature refers to a Feature where AI/ML may be used.

Agreement

·        For functionality identification, there may be either one or more than one Functionalities defined within an AI/ML-enabled feature.

Agreement

For 3GPP AI/ML for PHY SI discussion, when companies report model complexity, the complexity shall be reported in terms of “number of real-value model parameters” and “number of real-value operations” regardless of underlying model arithmetic.
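The agreed arithmetic-agnostic reporting can be illustrated with a short sketch. Everything below (the dense-layer structure, the layer widths, and the convention of counting a multiply-accumulate as two real-value operations) is an assumption for illustration, not part of the agreement:

```python
# Hypothetical sketch: report model complexity as "number of real-value model
# parameters" and "number of real-value operations", independent of the
# underlying arithmetic (FP32, INT8, ...). Layer widths are illustrative only.
layer_sizes = [512, 256, 64]  # assumed fully-connected layer widths

def dense_complexity(sizes):
    """Return (parameters, real-value operations) for a chain of dense layers."""
    params, ops = 0, 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        params += n_in * n_out + n_out   # weights plus biases
        ops += 2 * n_in * n_out + n_out  # multiply-accumulate counted as 2 ops, plus bias adds
    return params, ops

params, ops = dense_complexity(layer_sizes)
print(params, ops)  # 147776 295232
```

The same two numbers would be reported whether the model is quantized or not, which is what "regardless of underlying model arithmetic" implies.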

 

 

Final summary in R1-2301868        Final Summary of General Aspects of AI/ML Framework               Moderator (Qualcomm)

9.2.2        AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2300044         Discussion and evaluation of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2300108         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2300154         Evaluations of AI-CSI       Ericsson

R1-2300171         Evaluation on AI CSI feedback enhancement              ZTE

R1-2300211         Discussion on evaluation on AI/ML for CSI feedback enhancement       Spreadtrum Communications, BUPT

R1-2300280         Evaluation methodology and results on AI/ML for CSI feedback enhancement               OPPO

R1-2300348         Evaluation on AI ML for CSI feedback enhancement Mavenir

R1-2300397         On Evaluation of AI/ML based CSI Google

R1-2300444         Evaluation on AI/ML for CSI feedback enhancement vivo

R1-2300501         Evaluation of AI/ML based methods for CSI feedback enhancement      Fraunhofer IIS, Fraunhofer HHI

R1-2300530         Evaluation on AI/ML for CSI feedback enhancement LG Electronics

R1-2300567         Discussion on evaluation on AI/ML for CSI feedback enhancement       xiaomi

R1-2300604         Evaluation of ML for CSI feedback enhancement       Nokia, Nokia Shanghai Bell

R1-2300671         Evaluation on AI/ML for CSI feedback enhancement CATT

R1-2300716         Evaluation on AI/ML for CSI feedback enhancement China Telecom

R1-2300744         Evaluation on AI/ML for CSI feedback enhancement Fujitsu

R1-2300841         Some discussions on evaluation on AI-ML for CSI feedback   CAICT

R1-2300941         Evaluation for CSI feedback enhancements  Intel Corporation

R1-2300990         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2301031         Evaluation on AI/ML for CSI feedback enhancement Indian Institute of Tech (H)

R1-2301041         Evaluation on AI/ML for CSI feedback enhancement ETRI

R1-2301097         Evaluation of joint CSI estimation and compression with AI/ML            BJTU

R1-2301156         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2301178         Evaluation of AI and ML for CSI feedback enhancement         NVIDIA

R1-2301199         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2301223         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2301255         Evaluation on AI/ML for CSI feedback enhancement Samsung

R1-2301337         Evaluation for AI/ML based CSI feedback enhancement          Apple

R1-2301404         Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated

R1-2301466         Evaluation of AI/ML based methods for CSI feedback enhancement      SEU               (Late submission)

R1-2301485         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2301587         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

R1-2301666         Discussion on AI/ML based CSI Feedback Enhancement         Indian Institute of Tech (M), CEWiT, IIT Kanpur

R1-2301805         Evaluation of AI and ML for CSI feedback enhancement         CEWiT  (rev of R1-2301688)

 

R1-2301936        Summary#1 for CSI evaluation of [112-R18-AI/ML]            Moderator (Huawei)

From Monday session

Conclusion

For the evaluation of the AI/ML based CSI feedback enhancement, if the SGCS is adopted as the intermediate KPI as part of the ‘Evaluation Metric’ for rank>1 cases, except for Method 3, which has already been supported, there is no consensus on whether to adopt an additional method.

 

Agreement

Confirm the following working assumption of RAN1#110bis-e:

Working assumption

In the evaluation of the AI/ML based CSI feedback enhancement, if SGCS is adopted as the intermediate KPI for the rank>1 situation, companies to ensure the correct calculation of SGCS and to avoid the disorder issue of the output eigenvectors

·          Note: Eventual KPI can still be used to compare the performance
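As a minimal illustration of this working assumption, the sketch below computes a per-layer SGCS with a phase-invariant inner product and compares target and reconstructed eigenvectors by a consistent layer index, which is one way to avoid the disorder issue; both the formula and the pairing convention are assumptions for illustration, not agreed definitions:

```python
import numpy as np

# Illustrative per-layer SGCS (assumed formula, not an agreed definition):
#   SGCS_l = |v_hat_l^H v_l|^2 / (||v_l||^2 * ||v_hat_l||^2)
# V and V_hat hold per-layer eigenvectors in columns (num_ports x rank).
# Ordering the columns consistently on both sides (e.g., by descending
# eigenvalue) ensures layer l of the output is scored against layer l of
# the target, avoiding the eigenvector "disorder" issue.
def sgcs_per_layer(V, V_hat):
    scores = []
    for l in range(V.shape[1]):
        v, v_hat = V[:, l], V_hat[:, l]
        num = abs(np.vdot(v_hat, v)) ** 2
        den = (np.linalg.norm(v) ** 2) * (np.linalg.norm(v_hat) ** 2)
        scores.append(num / den)
    return scores

# A phase-rotated copy of the target still scores ~1.0 per layer,
# since SGCS is invariant to a common phase on each eigenvector.
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]], dtype=complex)
print(sgcs_per_layer(V, V * np.exp(1j * 0.3)))  # ~[1.0, 1.0]
```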

 

Conclusion

For the intermediate KPI for evaluating the accuracy of the AI/ML output CSI, except for SGCS and NMSE, which have been agreed as the baseline metrics, no additional intermediate KPI is adopted as mandatory.

·        It is up to companies to optionally report other intermediate KPIs, e.g., Relative achievable rate (RAR)

Agreement

For the evaluation of CSI enhancements, companies can optionally provide the additional throughput baseline based on CSI without compression (e.g., eigenvector from measured channel), which is taken as an upper bound for performance comparison.

 

 

R1-2301937        Summary#2 for CSI evaluation of [112-R18-AI/ML]            Moderator (Huawei)

From Tuesday session

Agreement

·        Confirm the following WA on the benchmark for CSI prediction achieved in RAN1#111:

Working Assumption

For the AI/ML based CSI prediction sub use case, the nearest historical CSI w/o prediction as well as non-AI/ML/collaboration level x AI/ML based CSI prediction approach are both taken as baselines for the benchmark of performance comparison, and the specific non-AI/ML/collaboration level x AI/ML based CSI prediction is reported by companies.

·        Note: the specific non-AI/ML based CSI prediction is compatible with R18 MIMO; collaboration level x AI/ML based CSI prediction could be implementation based AI/ML compatible with R18 MIMO as an example

o   It does not imply any restriction on future specification for CSI prediction

·        FFS how to model the simulation cases for collaboration level x CSI prediction and LCM for collaboration level y/z CSI prediction

 

Agreement

The CSI prediction-specific generalization scenario of various UE speeds (e.g., 10km/h, 30km/h, 60km/h, 120km/h, etc.) is added to the list of scenarios for performing the generalization verification.

·        FFS various frequency PRBs (e.g., trained based on one set of PRBs, inference on the same/different set of PRBs)

Agreement

For how to separate the templates for different training types/cases for AI/ML-based CSI compression without generalization/scalability verification, the following is considered:

·        The template determined in the RAN1#111 working assumption is titled “1-on-1 joint training”

·        A second separate template is introduced to capture the evaluation results for “multi-vendor joint training”

o   Note: this table captures the results for the joint training cases of 1 NW part model to M>1 UE part models, N>1 NW part models to 1 UE part model, or N>1 NW part models to M>1 UE part models. An example is multi-vendor Type 2 training.

·        A third separate template is introduced to capture the evaluation results for “separate training”

·        FFS: additional KPIs for each template, e.g., overhead, latency, etc.

Agreement

For the evaluation of training Type 3 under CSI compression, besides the 3 cases considered for multi-vendors, add one new Case (1-on-1 training with joint training) as benchmark/upper bound for performance comparison.

·        FFS the relationship between the pair(s) of models for Type 3 and the pair(s) of models for new Case

 

 

R1-2301938        Summary#3 for CSI evaluation of [112-R18-AI/ML]            Moderator (Huawei)

From Wednesday session

Agreement

For the evaluation of the AI/ML based CSI compression sub use cases with rank >=1, companies to report the specific option adopted for AI/ML model settings to adapt to ranks/layers.

 

Agreement

The CSI feedback overhead is calculated as the average CSI payload per rank, weighted by the distribution of ranks reported by the UE.
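The agreed overhead calculation amounts to an expectation of the payload over the reported rank distribution. A minimal sketch, with purely illustrative numbers (the rank fractions and per-rank payloads below are assumptions, not from the minutes):

```python
# Illustrative CSI feedback overhead as the rank-distribution-weighted
# average of the per-rank payload. All numbers are assumed for the example.
rank_distribution = {1: 0.6, 2: 0.4}  # assumed fraction of UE reports per rank
payload_bits = {1: 60, 2: 120}        # assumed CSI payload per rank, in bits

def weighted_overhead(dist, payload):
    """Average payload in bits, weighted by the reported rank distribution."""
    return sum(p * payload[r] for r, p in dist.items())

print(weighted_overhead(rank_distribution, payload_bits))  # 84.0
```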

 

Working Assumption

For the initial template for AI/ML-based CSI compression without generalization/scalability verification achieved in the working assumption in the RAN1#111 meeting, X, Y and Z are determined as:

·        X is <= 80 bits

·        Y is 100-140 bits

·        Z is >= 230 bits

Working Assumption

X, Y and Z apply per layer
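The two working assumptions pin down three non-contiguous per-layer payload ranges. A small helper (illustrative only, not part of any agreement) makes the mapping explicit, including the gaps that belong to none of the three ranges:

```python
# Illustrative helper: classify a per-layer CSI feedback payload into the
# X/Y/Z ranges of the working assumption. Payloads of 81-99 bits or
# 141-229 bits fall in the unassigned gaps between the agreed ranges.
def payload_bucket(bits):
    if bits <= 80:
        return "X"
    if 100 <= bits <= 140:
        return "Y"
    if bits >= 230:
        return "Z"
    return None  # unassigned gap

print(payload_bucket(60), payload_bucket(120), payload_bucket(250))  # X Y Z
```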

 

 

R1-2301939        Summary#4 for CSI evaluation of [112-R18-AI/ML]            Moderator (Huawei)

From Friday session

Working assumption

The following initial template is considered to replace the template achieved in the working assumption in the RAN1#111 meeting, for companies to report the evaluation results of AI/ML-based CSI compression of 1-on-1 joint training without generalization/scalability verification

Table X. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, [traffic type], [Max rank value], [RU]

One column per source (Source 1, ...), with the following rows:

·        CSI generation part: AI/ML model backbone; Pre-processing; Post-processing; FLOPs/M; Number of parameters/M; [Storage /Mbytes]

·        CSI reconstruction part: AI/ML model backbone; [Pre-processing]; [Post-processing]; FLOPs/M; Number of parameters/M; [Storage /Mbytes]

·        Common description: Input type; Output type; Quantization/dequantization method; Rank/layer adaptation settings for rank>1

·        Dataset description: Train/k; Test/k; Ground-truth CSI quantization method (including scalar/codebook based quantization, and the parameters); Overhead reduction compared to Float32 if high resolution quantization of ground-truth CSI is applied; [Other assumptions/settings agreed to be reported]

·        Benchmark; Benchmark assumptions, e.g., CSI overhead calculation method (Optional)

·        SGCS of benchmark, [layer 1] and [layer 2]: one row each for CSI feedback payload X, Y and Z

·        Gain for SGCS, [layer 1] and [layer 2]: one row each for CSI feedback payload X, Y and Z; (other layers)

·        NMSE of benchmark, [layer 1] and [layer 2]: one row each for CSI feedback payload X, Y and Z

·        Gain for NMSE, [layer 1] and [layer 2]: one row each for CSI feedback payload X, Y and Z; (other layers)

·        Other intermediate KPI (description/value) (optional); Gain for other intermediate KPI (description/value) (optional)

·        Gain for Mean UPT and Gain for 5% UPT (for a specific CSI feedback overhead): one row each for [CSI feedback payload X*Max rank value], [CSI feedback payload Y*Max rank value] and [CSI feedback payload Z*Max rank value]

·        Gain for upper bound without CSI compression over Benchmark, Mean UPT (Optional) and 5% UPT (Optional): one row each for [CSI feedback payload X*Max rank value], [CSI feedback payload Y*Max rank value] and [CSI feedback payload Z*Max rank value]

·        [CSI feedback reduction (%)]

·        FFS others

 

Note: “Benchmark” means the type of Legacy CB used for comparison.

Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.

Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.

 

Working assumption

A separate table to capture the evaluation results of generalization/scalability verification for AI/ML-based CSI compression is given in the following initial template

·        To be collected before 112bis-e meeting

·        FFS whether the intermediate KPI results are gain over benchmark or absolute values

·        FFS whether the intermediate KPI results are in forms of linear or dB

Table X. Evaluation results for CSI compression with model generalization/scalability, [Max rank value], [Scenario/configuration]

One column per source (Source 1, ...), with the following rows:

·        CSI generation part: AI/ML model backbone; Pre-processing; Post-processing; FLOPs/M; Number of parameters/M; [Storage /Mbytes]

·        CSI reconstruction part: AI/ML model backbone; [Pre-processing]; [Post-processing]; FLOPs/M; Number of parameters/M; [Storage /Mbytes]

·        Common description: Input type; Output type; Quantization/dequantization method; Generalization/scalability method description if applicable, e.g., truncation, adaptation layer, etc.; Input/output scalability dimension if applicable, e.g., N>=1 NW part model(s) to M>=1 UE part model(s)

·        Dataset description: Ground-truth CSI quantization method; [Other assumptions/settings agreed to be reported]

·        Generalization Case 1: Train (setting#A, size/k); Test (setting#A, size/k); SGCS for layer 1 and layer 2, one row each for CSI feedback payload X, Y and Z; NMSE for layer 1 and layer 2, one row each for CSI feedback payload X, Y and Z; (other settings for Case 1)

·        Generalization Case 2: Train (setting#A, size/k); Test (setting#B, size/k); (results for Case 2); (other settings for Case 2)

·        Generalization Case 3: Train (setting#A+#B, size/k); Test (setting#A/#B, size/k); (results for Case 3); (other settings for Case 3)

·        Fine-tuning case (optional): Train (setting#A, size/k); Fine-tune (setting#B, size/k); Test (setting#B, size/k); (results for Fine-tuning); (other settings for Fine-tuning)

·        FFS others

 

Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.

Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.

 

Working Assumption

The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI prediction with generalization verification

·        To be collected before 112bis-e meeting

·        FFS whether the intermediate KPI results are gain over benchmark or absolute values

·        FFS whether the intermediate KPI results are in forms of linear or dB

Table X. Evaluation results for CSI prediction with model generalization, [Max rank value]

One column per source (Source 1, ...), with the following rows:

·        AI/ML model description: AI/ML model description (e.g., backbone, structure); [Pre-processing]; [Post-processing]; FLOPs/M; Parameters/M; [Storage /Mbytes]; Input type; Output type

·        Assumption: CSI feedback periodicity; Observation window (number/distance); Prediction window (number/distance between prediction instances/distance from the last observation instance to the 1st prediction instance); Whether/how to adopt spatial consistency

·        Generalization Case 1: Train (setting#A, size/k); Test (setting#A, size/k); SGCS (1, ..., N, where N is the number of prediction instances); NMSE (1, ..., N); (other settings and results for Case 1)

·        Generalization Case 2: Train (setting#A, size/k); Test (setting#B, size/k); SGCS (1, ..., N); NMSE (1, ..., N); (other settings and results for Case 2)

·        Generalization Case 3: Train (setting#A+#B, size/k); Test (setting#A/#B, size/k); SGCS (1, ..., N); NMSE (1, ..., N); (other settings and results for Case 3)

·        Fine-tuning case (optional): Train (setting#A, size/k); Fine-tune (setting#B, size/k); Test (setting#B, size/k); SGCS (1, ..., N); NMSE (1, ..., N); (other settings and results for Fine-tuning)

·        FFS others

 

Working Assumption

The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI compression for multi-vendor joint training and without generalization/scalability verification

·        To be collected before 112bis-e meeting

·        FFS whether the intermediate KPI results are gain over benchmark or absolute values

·        FFS whether the intermediate KPI results are in forms of linear or dB

·        FFS case of multiple layers

Table X. Evaluation results for CSI compression of multi-vendor joint training without model generalization/scalability, [Max rank value]

One column per source (Source 1, ...), with the following rows:

·        Common description: Input type; Output type; [Training method]; Quantization/dequantization method

·        Dataset description: Train/k; Test/k; Ground-truth CSI quantization method

·        Case 1 (baseline): NW#1-UE#1: UE part AI/ML model backbone/structure; Network part AI/ML model backbone/structure; ...; (other NW-UE combinations for Case 1)

·        Case 2 (1 NW part to M>1 UE parts): NW part model backbone/structure; UE#1 part model backbone/structure; UE#1 part training dataset description and size; ...; UE#M part model backbone/structure; UE#M part training dataset description and size

·        Case 3 (N>1 NW parts to 1 UE part): UE part model backbone/structure; NW#1 part model backbone/structure; NW#1 part training dataset description and size; ...; NW#N part model backbone/structure; NW#N part training dataset description and size

·        Intermediate KPI type (SGCS/NMSE); FFS other cases

·        Case 1: NW#1-UE#1: Intermediate KPI, one row each for CSI feedback payload X, Y and Z; (results for other NW-UE combinations for Case 1)

·        Case 2: Intermediate KPI, one row per CSI feedback payload (X, Y, Z) for each of NW-UE#1 ... NW-UE#M

·        Case 3: Intermediate KPI, one row per CSI feedback payload (X, Y, Z) for each of NW#1-UE ... NW#N-UE

·        FFS other cases

·        FFS others

Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.

Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.

 

Working Assumption

The following initial template is considered for companies to report the evaluation results of AI/ML-based CSI compression for sequentially separate training and without generalization/scalability verification

·        To be collected before 112bis-e meeting

·        FFS whether the intermediate KPI results are gain over benchmark or absolute values

·        FFS whether the intermediate KPI results are in forms of linear or dB

·        FFS case of multiple layers

Table X. Evaluation results for CSI compression of separate training without model generalization/scalability, [Max rank value]

One column per source (Source 1, ...), with the following rows:

·        Common description: Input type; Output type; Quantization/dequantization method; Shared output of CSI generation part/input of reconstruction part is before or after quantization

·        Dataset description: Test/k; Ground-truth CSI quantization method

·        [Benchmark: NW#1-UE#1 joint training]: UE part AI/ML model backbone/structure; Network part AI/ML model backbone/structure; Training dataset size; ...; (other NW-UE combinations for benchmark)

·        Case 1-NW first training: NW part AI/ML model backbone/structure; UE#1 part model backbone/structure; UE#1 part training dataset description and size; ...; UE#M part model backbone/structure; UE#M part training dataset description and size; [air-interface overhead of information (e.g., dataset) sharing]

·        Case 1-UE first training: NW#1 part model backbone/structure; NW#1 part training dataset description and size; ...; NW#N part model backbone/structure; NW#N part training dataset description and size; UE part model backbone/structure; [air-interface overhead of information (e.g., dataset) sharing]

·        Case 2-UE first training: UE#1 part model backbone/structure; ...; UE#M part model backbone/structure; NW part model backbone/structure; NW part training dataset description and size (e.g., description/size of dataset from M UEs and how to merge)

·        Case 3-NW first training: NW#1 part model backbone/structure; ...; NW#N part model backbone/structure; UE part model backbone/structure; UE part training dataset description and size (e.g., description/size of dataset from N NWs and how to merge)

·        Intermediate KPI type (SGCS/NMSE); FFS other cases

·        NW#1-UE#1 joint training: Intermediate KPI, one row each for CSI feedback payload X, Y and Z; (results for other 1-on-1 NW-UE joint training combinations)

·        Case 1-NW first training: Intermediate KPI, one row per CSI feedback payload (X, Y, Z) for each of NW-UE#1 ... NW-UE#M

·        Case 1-UE first training: Intermediate KPI, one row per CSI feedback payload (X, Y, Z) for each of NW#1-UE ... NW#N-UE

·        Case 2-UE first training: Intermediate KPI, one row per CSI feedback payload (X, Y, Z) for each of NW-UE#1 ... NW-UE#M

·        Case 3-NW first training: Intermediate KPI, one row per CSI feedback payload (X, Y, Z) for each of NW#1-UE ... NW#N-UE

·        FFS other cases

·        FFS others

Note: “Quantization/dequantization method” includes the description of training awareness (Case 1/2-1/2-2), type of quantization/dequantization (SQ/VQ), etc.

Note: “Input type” means the input of the CSI generation part. “output type” means the output of the CSI reconstruction part.

 

 

Final summary in R1-2301940.

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including potential specification impact.

 

R1-2300045         Discussion on other aspects of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2300071         Further discussions of AI/ML for CSI feedback enhancement  Keysight Technologies UK Ltd, Universidad de Málaga

R1-2300109         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2300153         Discussion on AI-CSI        Ericsson

R1-2300172         Discussion on other aspects for AI CSI feedback enhancement ZTE

R1-2300212         Discussion on other aspects on AI/ML for CSI feedback           Spreadtrum Communications

R1-2300281         On sub use cases and other aspects of AI/ML for CSI feedback enhancement               OPPO

R1-2300398         On Enhancement of AI/ML based CSI           Google

R1-2300445         Other aspects on AI/ML for CSI feedback enhancement           vivo

R1-2300531         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2300568         Discussion on potential specification impact for CSI feedback based on AI/ML               xiaomi

R1-2300605         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2300672         Potential specification impact on AI/ML for CSI feedback enhancement               CATT

R1-2300717         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2300745         Views on specification impact for CSI feedback enhancement Fujitsu

R1-2300767         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2300842         Discussions on AI-ML for CSI feedback       CAICT

R1-2300863         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2300869         Considerations on CSI measurement enhancements via AI/ML Sony

R1-2300942         On other aspects on AI/ML for CSI feedback              Intel Corporation

R1-2300991         Discussion on other aspects on AI/ML for CSI feedback enhancement  CMCC

R1-2301042         Discussion on other aspects on AI/ML for CSI feedback enhancement  ETRI

R1-2301098         Joint CSI estimation and compression with AI/ML     BJTU

R1-2301157         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2301179         AI and ML for CSI feedback enhancement   NVIDIA

R1-2301200         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2301224         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2301256         Representative sub use cases for CSI feedback enhancement    Samsung

R1-2301313         Discussion on AI/ML for CSI Feedback Enhancement              III

R1-2301338         Discussion on other aspects of AI/ML for CSI enhancement    Apple

R1-2301405         Other aspects on AI/ML for CSI feedback enhancement           Qualcomm Incorporated

R1-2301486         Discussion on other aspects on AI/ML for CSI feedback enhancement  NTT DOCOMO, INC.

R1-2301588         Other aspects on AI/ML for CSI feedback enhancement           MediaTek Inc.

R1-2301665         Discussions on Sub-Use Cases in AI/ML for CSI Feedback Enhancement            TCL Communication Ltd.

 

R1-2301910        Summary #1 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Monday session

Agreement

In CSI compression using two-sided model use case, further study potential specification impact of the following output-CSI-UE and input-CSI-NW at least for Option 1:

·        Option 1: Precoding matrix

o   1a: The precoding matrix in spatial-frequency domain

o   1b: The precoding matrix represented using angular-delay domain projection

·        Option 2: Explicit channel matrix (i.e., full Tx * Rx MIMO channel)

o   2a: raw channel is in spatial-frequency domain

o   2b: raw channel is in angular-delay domain

·        Note: Whether Option 2 is also studied depends on the performance evaluations in 9.2.2.1.

·        Note: RI and CQI will be discussed separately

 

 

R1-2301911        Summary #2 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Tuesday session

Agreement

In CSI compression using two-sided model use case, further study the following options for CQI determination in CSI report, if CQI in CSI report is configured.   

 

Conclusion

In CSI compression using two-sided model use case, further discuss the pros/cons of different offline training collaboration types including at least the following aspects:

·        Whether model can be kept proprietary

·        Requirements on privacy-sensitive dataset sharing

·        Flexibility to support cell/site/scenario/configuration specific model

·        gNB/device specific optimization – i.e., whether hardware-specific optimization of the model is possible, e.g. compilation for the specific hardware

·        Model update flexibility after deployment

·        feasibility of allowing UE side and NW side to develop/update models separately

·        Model performance based on evaluation in 9.2.2.1

·        Whether gNB can maintain/store a single/unified model

·        Whether UE device can maintain/store a single/unified model

·        Extendability: to train new UE-side model compatible with NW-side model in use; Or to train new NW-side model compatible with UE-side model in use

·        Whether training data distribution can be matched to the device that will use the model for inference

·        Whether device capability can be considered for model development

·        Other aspects are not precluded

·        Note: training data collection and dataset/model delivery will be discussed separately

 

 

R1-2301912        Summary #3 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Wednesday session

Agreement

 

 

R1-2301913        Summary #4 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From Friday session

Agreement

In CSI compression using two-sided model use case, further study the following aspects for CSI configuration and report:

 

Agreement

In CSI compression using two-sided model use case, further study the feasibility and methods to support the legacy CSI reporting principles including at least:

 

Agreement

In CSI compression using two-sided model use case, further study the necessity, feasibility, and potential specification impact for intermediate KPIs based monitoring including at least:

·       UE-side monitoring based on the output of the CSI reconstruction model, subject to the aligned format, associated to the CSI report, indicated by the NW or obtained from the network side.

o   Network may configure a threshold criterion to facilitate UE to perform model monitoring.

·       UE-side monitoring based on the output of the CSI reconstruction model at the UE-side

o   Note: CSI reconstruction model at the UE-side can be the same or different comparing to the actual CSI reconstruction model used at the NW-side.

o   Network may configure a threshold criterion to facilitate UE to perform model monitoring.

·       FFS: Other solutions, e.g., UE-side uses a model that directly outputs intermediate KPI. Network-side monitoring based on target CSI measured via SRS from the UE.

Note: Monitoring approaches not based on intermediate KPI are not precluded

Note: the study of intermediate KPIs based monitoring should take into account the monitoring reliability (accuracy), overhead, complexity, and latency.
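As a toy illustration of the threshold-criterion idea in the agreement above (the pass-ratio rule and all names are assumptions, not agreed behaviour):

```python
def kpi_threshold_monitor(kpi_samples, threshold, min_pass_ratio=0.9):
    """Toy UE-side monitoring check: the model is judged as performing
    adequately when the fraction of monitored samples whose intermediate
    KPI (e.g., SGCS) meets the NW-configured threshold reaches
    min_pass_ratio. Function name and pass-ratio rule are illustrative."""
    passed = sum(1 for s in kpi_samples if s >= threshold)
    return passed / len(kpi_samples) >= min_pass_ratio

print(kpi_threshold_monitor([0.95, 0.91, 0.93, 0.97], threshold=0.9))  # True
print(kpi_threshold_monitor([0.95, 0.91, 0.62, 0.97], threshold=0.9))  # False
```

The agreed study would still need to weigh such a scheme against the monitoring reliability, overhead, complexity, and latency noted above.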

9.2.3        AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2300046         Discussion and evaluation of AI/ML for beam management     FUTUREWEI

R1-2300110         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2300173         Evaluation on AI beam management             ZTE

R1-2300179         Evaluations of AIML for beam management Ericsson

R1-2300213         Evaluation on AI/ML for beam management Spreadtrum Communications

R1-2300282         Evaluation methodology and results on AI/ML for beam management   OPPO

R1-2300399         On Evaluation of AI/ML based Beam Management    Google

R1-2300446         Evaluation on AI/ML for beam management vivo

R1-2300532         Evaluation on AI/ML for beam management LG Electronics

R1-2300569         Evaluation on AI/ML for beam management xiaomi

R1-2300593         Discussion for evaluation on AI/ML for beam management     InterDigital, Inc.

R1-2300606         Evaluation of ML for beam management      Nokia, Nokia Shanghai Bell

R1-2300673         Evaluation on AI/ML for beam management CATT

R1-2300718         Evaluation on AI/ML for beam management China Telecom

R1-2300746         Evaluation on AI/ML for beam management Fujitsu

R1-2300843         Some discussions on evaluation on AI-ML for Beam management         CAICT

R1-2300943         Evaluations for AI/ML beam management   Intel Corporation

R1-2300992         Discussion on evaluation on AI/ML for beam management      CMCC

R1-2301180         Evaluation of AI and ML for beam management         NVIDIA

R1-2301201         Evaluation on AI/ML for beam management Lenovo

R1-2301257         Evaluation on AI/ML for Beam management              Samsung

R1-2301339         Evaluation for AI/ML based beam management enhancements Apple

R1-2301406         Evaluation on AI/ML for beam management Qualcomm Incorporated

R1-2301487         Discussion on evaluation on AI/ML for beam management      NTT DOCOMO, INC.

R1-2301589         Evaluation on AI/ML for beam management MediaTek Inc.

R1-2301689         Evaluation on AI/ML for beam management CEWiT

 

R1-2301956        Feature lead summary #1 evaluation of AI/ML for beam management               Moderator (Samsung)

From Monday session

Agreement

 

Agreement

 

 

R1-2301957        Feature lead summary #2 evaluation of AI/ML for beam management               Moderator (Samsung)

From Tuesday session

Agreement

o    Option A (baseline): the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx and Rx beams

o    Option B (optional): the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx beams with specific Rx beam(s)

§  FFS on specific Rx beam(s)

§  Note: specific Rx beams are subset of all Rx beams

 

Agreement

·        For AI/ML models, which provide L1-RSRP as the model output, to evaluate the accuracy of predicted L1-RSRP, companies optionally report average (absolute value)/CDF of the predicted L1-RSRP difference, where the predicted L1-RSRP difference is defined as:

o   The difference between the predicted L1-RSRP of Top-1[/K] predicted beam and the ideal L1-RSRP of the same beam.
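The reported metric in the agreement above can be sketched as follows (illustrative Python; beam labels, dict-based indexing, and the function name are assumptions):

```python
def l1_rsrp_diff(predicted_rsrp, ideal_rsrp, k=1):
    """Predicted L1-RSRP difference per the agreement: for each of the
    Top-1[/K] beams ranked by predicted L1-RSRP, the difference between
    the predicted L1-RSRP and the ideal L1-RSRP of the same beam."""
    top = sorted(predicted_rsrp, key=predicted_rsrp.get, reverse=True)[:k]
    return [predicted_rsrp[b] - ideal_rsrp[b] for b in top]

pred  = {"b0": -70.2, "b1": -68.5, "b2": -75.0}  # dBm, illustrative values
ideal = {"b0": -69.8, "b1": -69.0, "b2": -74.1}
print(l1_rsrp_diff(pred, ideal, k=1))  # Top-1 is b1 -> [0.5]
```

Companies would then report the average of the absolute value, or the CDF, of such differences over the test set.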

 

R1-2301958        Feature lead summary #3 evaluation of AI/ML for beam management               Moderator (Samsung)

From Thursday session

Agreement

 

Agreement

·        Additionally study the following option on the selection of Set B of beams (pairs) (for Option 2: Set B is variable)

 

 

Final summary in R1-2301959.

9.2.3.2       Other aspects on AI/ML for beam management

Including potential specification impact.

 

R1-2300047         Discussion on other aspects of AI/ML for beam management  FUTUREWEI

R1-2300111         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2300174         Discussion on other aspects for AI beam management              ZTE

R1-2300180         Discussion on AIML for beam management Ericsson

R1-2300195         Discussion on other aspects of AI/ML beam management        New H3C Technologies Co., Ltd.

R1-2300214         Other aspects on AI/ML for beam management          Spreadtrum Communications

R1-2300283         Other aspects of AI/ML for beam management           OPPO

R1-2300400         On Enhancement of AI/ML based Beam Management              Google

R1-2300447         Other aspects on AI/ML for beam management          vivo

R1-2300533         Other aspects on AI/ML for beam management          LG Electronics

R1-2300570         Potential specification impact on AI/ML for beam management             xiaomi

R1-2300594         Discussion for other aspects on AI/ML for beam management InterDigital, Inc.

R1-2300607         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2300674         Potential specification impact on AI/ML for beam management             CATT

R1-2300747         Sub use cases and specification impact on AI/ML for beam management               Fujitsu

R1-2300824         Discussion on AI/ML for beam management NEC

R1-2300844         Discussions on AI-ML for Beam management            CAICT

R1-2300870         Consideration on AI/ML for beam management          Sony

R1-2300944         Other aspects on AI/ML for beam management          Intel Corporation

R1-2300993         Discussion on other aspects on AI/ML for beam management  CMCC

R1-2301043         Discussion on other aspects on AI/ML for beam management  ETRI

R1-2301181         AI and ML for beam management  NVIDIA

R1-2301197         Discussion on AI/ML for beam management Panasonic

R1-2301202         Further aspects of AI/ML for beam management        Lenovo

R1-2301258         Representative sub use cases for beam management   Samsung

R1-2301340         Discussion on other aspects of AI/ML for beam management  Apple

R1-2301407         Other aspects on AI/ML for beam management          Qualcomm Incorporated

R1-2301488         Discussion on other aspects on AI/ML for beam management  NTT DOCOMO, INC.

R1-2301539         Discussion on other aspects on AI/ML for beam management  KT Corp.

R1-2301590         Other aspects on AI/ML for beam management          MediaTek Inc.

R1-2301685         Discussions on Sub-Use Cases in AI/ML for Beam Management           TCL Communication Ltd.

 

R1-2301894        Summary#1 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Monday session

Conclusion

For the sub use case BM-Case1 and BM-Case2, “Alt.2: DL Rx beam prediction” is deprioritized.

 

Agreement

Regarding the performance metric(s) of AI/ML model monitoring for BM-Case1 and BM-Case2, study the following alternatives (including feasibility/necessity) with potential down-selection:

 

 

R1-2301895        Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Tuesday session

Conclusion

Regarding the explicit assistance information from UE to network for NW-side AI/ML model, RAN1 has no consensus to support the following information

·        UE location

·        UE moving direction

·        UE Rx beam shape/direction

 

R1-2301896        Summary#3 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Thursday session

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, study the necessity, feasibility and the potential specification impact (if needed) of the following information reported from UE to network:

 

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, study potential specification impact of AI model inference from the following additional aspects on top of previous agreements:

 

Conclusion

Regarding the explicit assistance information from network to UE for UE-side AI/ML model, RAN1 has no consensus to support the following information

 

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, regarding NW-side performance monitoring, study the following aspects as a starting point including the study of necessity:

 

 

R1-2301897        Summary#4 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Friday session

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, regarding UE-side performance monitoring, study the following aspects as a starting point including the study of necessity and feasibility:

·        Indication/request/report from UE to gNB for performance monitoring

o   Note: The indication/request/report may not be needed in some case(s)

·        Configuration/Signaling from gNB to UE for performance monitoring

·        Other aspect(s) is not precluded

9.2.4        AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2300112         Evaluation on AI/ML for positioning accuracy enhancement    Huawei, HiSilicon

R1-2300141         Evaluation of AI/ML for Positioning Accuracy Enhancement  Ericsson Inc.

R1-2300175         Evaluation on AI positioning enhancement   ZTE

R1-2300284         Evaluation methodology and results on AI/ML for positioning accuracy enhancement               OPPO

R1-2300401         On Evaluation of AI/ML based Positioning  Google

R1-2300448         Evaluation on AI/ML for positioning accuracy enhancement    vivo

R1-2300534         Evaluation on AI/ML for positioning accuracy enhancement    LG Electronics

R1-2300571         Evaluation on AI/ML for positioning accuracy enhancement    xiaomi

R1-2300608         Evaluation of ML for positioning accuracy enhancement          Nokia, Nokia Shanghai Bell

R1-2300675         Evaluation on AI/ML for positioning enhancement    CATT

R1-2300719         Evaluation on AI/ML for positioning accuracy enhancement    China Telecom

R1-2300748         Discussions on evaluation results of AIML positioning accuracy enhancement               Fujitsu

R1-2300845         Some discussions on evaluation on AI-ML for positioning accuracy enhancement               CAICT

R1-2300994         Discussion on evaluation on AI/ML for positioning accuracy enhancement               CMCC

R1-2301101         Evaluation on AI/ML for positioning accuracy enhancement    InterDigital, Inc.

R1-2301182         Evaluation of AI and ML for positioning enhancement             NVIDIA

R1-2301203         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2301259         Evaluation on AI/ML for Positioning            Samsung

R1-2301341         Evaluation on AI/ML for positioning accuracy enhancement    Apple

R1-2301408         Evaluation on AI/ML for positioning accuracy enhancement    Qualcomm Incorporated

R1-2301591         Evaluation of AIML for Positioning Accuracy Enhancement   MediaTek Inc.

R1-2301806         Evaluation on AI/ML for Positioning Accuracy Enhancement CEWiT  (rev of R1-2301690)

 

R1-2301946        Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Monday session

Agreement

For both direct AI/ML positioning and AI/ML assisted positioning, companies include the evaluation area in their reporting template, assuming the same evaluation area is used for training dataset and test dataset.

Note:

·        Baseline evaluation area for InF-DH = 120x60 m.

·        if different evaluation areas are used for training dataset and test dataset, they are marked out separately under “Train” and “Test” instead.

Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [with or without] model generalization, [short model description], UE distribution area = [e.g., 120x60 m, 100x40 m]

(Reporting template: one row of values per model, e.g., row “AI/ML”; cells are left blank for each source to fill in.)

·        Model input
·        Model output
·        Label
·        Clutter param
·        Dataset size: Train / Test
·        AI/ML complexity: Model complexity / Computation complexity
·        Horizontal positioning accuracy at CDF=90% (meters)

Table X. Evaluation results for AI/ML model deployed on [UE or network]-side, [short model description], UE distribution area = [e.g., 120x60 m, 100x40 m]

(Reporting template: one row of values per model, e.g., row “AI/ML”; cells are left blank for each source to fill in.)

·        Model input
·        Model output
·        Label
·        Settings (e.g., drops, clutter param, mix): Train / Test
·        Dataset size: Train / Test
·        AI/ML complexity: Model complexity / Computation complexity
·        Horizontal pos. accuracy at CDF=90% (m)

Agreement

The agreement made in RAN1#110 AI 9.2.4.1 is updated by adding an additional note:

Note: if complex values are used in the modelling process, the number of model parameters is doubled; this also applies to other agenda items (AIs) of the AI/ML study.
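The note's counting convention simply doubles the reported parameter count for complex-valued models, since each complex weight carries two real values; a trivial sketch (function name is an assumption):

```python
def reported_param_count(n_params, complex_valued):
    """Per the note: a model whose parameters are complex-valued is
    reported with twice the parameter count, because each complex
    weight consists of two real values (real and imaginary parts)."""
    return 2 * n_params if complex_valued else n_params

print(reported_param_count(1_000_000, complex_valued=True))   # 2000000
print(reported_param_count(1_000_000, complex_valued=False))  # 1000000
```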

 

 

R1-2301947        Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Tuesday session

Agreement

For both the direct AI/ML positioning and AI/ML assisted positioning, study the model input, considering the tradeoff among model performance, model complexity and computational complexity.

·        The type of information to use as model input. The candidates include at least: time-domain CIR, PDP.

·        The dimension of model input in terms of NTRP, Nt, and Nt’.

·        Note: For the direct AI/ML positioning, model input size has impact to signaling overhead for model inference.
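Of the two candidate input types, PDP can be derived from the time-domain CIR by discarding per-tap phase, which halves the number of real-valued inputs relative to a complex CIR; a minimal illustrative sketch:

```python
def cir_to_pdp(cir):
    """Power delay profile from a time-domain channel impulse response:
    per-tap power |h[t]|^2. PDP keeps only tap powers (phase is
    discarded), trading information for a smaller real-valued input."""
    return [abs(h) ** 2 for h in cir]

print(cir_to_pdp([1 + 0j, 0.5j, 0.25]))  # -> [1.0, 0.25, 0.0625]
```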

Agreement

For direct AI/ML positioning, study the performance of model monitoring methods, including:

·        Label based methods, where ground truth label (or its approximation) is provided for monitoring the accuracy of model output.

·        Label-free methods, where model monitoring does not require ground truth label (or its approximation).

Agreement

For AI/ML assisted approach, study the performance of label-free model monitoring methods, which do not require ground truth label (or its approximation) for model monitoring.

 

 

R1-2301948         Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2301949         Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2302169        Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From Thursday session

Conclusion

·        No dedicated evaluation is needed for the positioning accuracy performance of model switching

·        It does not preclude future discussion on model switching related performance

Agreement

For direct AI/ML positioning, study the impact of labelling error to positioning accuracy 

·        The ground truth label error in each dimension of x-axis and y-axis can be modeled as a truncated Gaussian distribution with zero mean and standard deviation of L meters, with truncation of the distribution to the [-2*L, 2*L] range.

o   Value L is up to sources.

·        Other models are not precluded

·        [Whether/how to study the impact of labelling error to label-based model monitoring methods]

·        [Whether/how to study the impact of labelling error for AI/ML assisted positioning.]
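The agreed label-error model above can be realized, e.g., by rejection sampling (one possible implementation; the agreement does not mandate a sampling method, and names are illustrative):

```python
import random

def label_error(L, rng=random):
    """Sample one ground-truth label error per axis (x, y) as agreed:
    zero-mean Gaussian with standard deviation L meters, truncated to
    [-2*L, 2*L] by rejection sampling."""
    def one_axis():
        while True:
            e = rng.gauss(0.0, L)
            if -2 * L <= e <= 2 * L:
                return e
    return one_axis(), one_axis()

random.seed(0)
ex, ey = label_error(L=1.0)
print(ex, ey)  # both samples are guaranteed to lie within [-2, 2]
```

Since roughly 95% of Gaussian samples already fall within two standard deviations, the rejection loop terminates quickly.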

Observation

Evaluation of the following generalization aspects show that the positioning accuracy of direct AI/ML positioning deteriorates when the AI/ML model is trained with dataset of one deployment scenario, while tested with dataset of a different deployment scenario.

Note: ideal model training and switching may provide the upper bound of achievable performance when the AI/ML model needs to handle different deployment scenarios.

 

 

Final summary in R1-2302170.

9.2.4.2       Other aspects on AI/ML for positioning accuracy enhancement

Including potential specification impact.

 

R1-2300113         Discussion on AI/ML for positioning accuracy enhancement   Huawei, HiSilicon

R1-2300142         Other Aspects of AI/ML Based Positioning Enhancement        Ericsson Inc.

R1-2300176         Discussion on other aspects for AI positioning enhancement    ZTE

R1-2300215         Discussion on other aspects on AI/ML for positioning accuracy enhancement               Spreadtrum Communications

R1-2300285         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement               OPPO

R1-2300402         On Enhancement of AI/ML based Positioning             Google

R1-2300449         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2300535         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2300572         Views on the other aspects of AI/ML-based positioning accuracy enhancement               xiaomi

R1-2300602         Other aspects on AI-ML for positioning accuracy enhancement              Baicells

R1-2300609         Other aspects on ML for positioning accuracy enhancement     Nokia, Nokia Shanghai Bell

R1-2300676         Potential specification impact on AI/ML for positioning enhancement   CATT

R1-2300749         Discussions on specification impacts for AIML positioning accuracy enhancement               Fujitsu

R1-2300831         Discussion on AI/ML for positioning accuracy enhancement   NEC

R1-2300846         Discussions on AI-ML for positioning accuracy enhancement CAICT

R1-2300871         On Other Aspects on AI/ML for Positioning Accuracy Enhancement     Sony

R1-2300995         Discussion on other aspects on AI/ML for positioning accuracy enhancement               CMCC

R1-2301115         Designs and potential specification impacts of AIML for positioning     InterDigital, Inc.

R1-2301140         On potential AI/ML solutions for positioning              Fraunhofer IIS, Fraunhofer HHI

R1-2301183         AI and ML for positioning enhancement       NVIDIA

R1-2301204         AI/ML Positioning use cases and associated Impacts Lenovo

R1-2301260         Representative sub use cases for Positioning Samsung

R1-2301342         On Other aspects on AI/ML for positioning accuracy enhancement        Apple

R1-2301409         Other aspects on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2301489         Discussion on other aspects on AI/ML for positioning accuracy enhancement     NTT DOCOMO, INC.

R1-2301592         Other Aspects on AI ML Based Positioning Enhancement        MediaTek Inc.

R1-2301667         Contributions on AI/ML based Positioning Accuracy Enhancement       Indian Institute of Tech (M), CEWiT, IIT Kanpur

 

R1-2301847        FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Monday session

Agreement

Regarding training data generation for AI/ML based positioning,

 

 

R1-2301996        FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Tuesday session

Agreement

Regarding training data collection for AI/ML based positioning, study benefit(s) and potential specification impact (including necessity) at least for the following aspects

 

 

R1-2302019        FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From Thursday session

Agreement

Regarding AI/ML model monitoring for AI/ML based positioning, to study and provide inputs on benefit(s), feasibility, necessity and potential specification impact for the following aspects

 

Agreement

Regarding AI/ML model inference, to study the potential specification impact (including the feasibility, and the necessity of specifying AI/ML model input and/or output) at least for the following aspects for AI/ML based positioning accuracy enhancement

 

Note: Companies are encouraged to report their assumption of functionality and their assumption of information element(s) of AI/ML functionality identification for AI/ML based positioning with UE-side model (Case 1 and 2a).


 RAN1#112-bis-e

9.2       Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-221348 for detailed scope of the SI.

 

R1-2304168        Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface)            Ad-hoc Chair (CMCC)

 

R1-2303580         Technical report for Rel-18 SI on AI and ML for NR air interface          Qualcomm Incorporated

R1-2304148         TR38.843 v0.1.0: Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface  Rapporteur (Qualcomm)

Note: This TR for the SI on AI/ML for NR air interface captures all the RAN1 agreements made up to RAN1#112. It is not formally endorsed and is provided for RAN1 review and comments. A new version of the TR, capturing the agreements from this meeting, is to be prepared as input to RAN1#113.

9.2.1        General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2302318         Discussion on common AI/ML characteristics and operations  FUTUREWEI

R1-2302357         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2302436         Discussion on general aspects of common AI PHY framework ZTE

R1-2302476         Discussions on AI/ML framework  vivo

R1-2302539         On general aspects of AI/ML framework      OPPO

R1-2302592         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2302627         Further discussion on the general aspects of ML for Air-interface          Nokia, Nokia Shanghai Bell

R1-2302694         Discussion on AI/ML framework for NR air interface CATT

R1-2302789         General aspects of AI/ML framework for NR air interface       Intel Corporation

R1-2302821         Discussion on general aspects of AI/ML framework   InterDigital, Inc.

R1-2302841         Considerations on common AI/ML framework           Sony

R1-2302877         Discussion on general aspects of AIML framework    Ericsson

R1-2302903         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2302974         Views on the general aspects of AI/ML framework    xiaomi

R1-2303041         Discussion on general aspects of AI/ML framework   Panasonic

R1-2303049         On General Aspects of AI/ML Framework   Google

R1-2303075         General aspects on AI/ML framework           LG Electronics

R1-2303119         General aspects of AI ML framework and evaluation methodology           Samsung

R1-2303182         Considerations on general aspects on AI-ML framework          CAICT

R1-2303193         Discussion on general aspects of AI/ML framework for NR air interface               ETRI

R1-2303223         Discussion on general aspects of AI/ML framework   CMCC

R1-2303335         Discussion on general aspects of AI/ML LCM             MediaTek Inc.

R1-2303412         General aspects of AI/ML framework           Fraunhofer IIS, Fraunhofer HHI

R1-2303434         General aspects of AI and ML framework for NR air interface NVIDIA

R1-2303474         Discussion on general aspect of AI/ML framework    Apple

R1-2303523         General aspects of AI/ML framework           Lenovo

R1-2303581         General aspects of AI/ML framework           Qualcomm Incorporated

R1-2303630         Discussion on general aspects of AI/ML framework   KDDI Corporation

R1-2303648         Discussion on AI/ML framework    Rakuten Mobile, Inc

R1-2303649         General Aspects of AI/ML framework          AT&T

R1-2303668         Discussion on general aspects of AI ML framework   NEC

R1-2303704         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2303809         Discussions on Common Aspects of AI/ML Framework           TCL Communication Ltd.

 

[112bis-e-R18-AI/ML-01] – Taesang (Qualcomm)

Email discussion on general aspects of AI/ML by April 26th

-        Check points: April 21, April 26

R1-2304049        Summary#1 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

Presented in April 18th GTW session.

 

R1-2304050        Summary#2 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From April 21st GTW session

Agreement

·        For AI/ML functionality identification and functionality-based LCM of UE-side models and/or UE-part of two-sided models:

o   Functionality refers to an AI/ML-enabled Feature/FG enabled by configuration(s), where configuration(s) is(are) supported based on conditions indicated by UE capability.

o   Correspondingly, functionality-based LCM operates based on, at least, one configuration of AI/ML-enabled Feature/FG or specific configurations of an AI/ML-enabled Feature/FG.

§  FFS: Signaling to support functionality-based LCM operations, e.g., to activate/deactivate/fallback/switch AI/ML functionalities

§  FFS: Whether/how to address additional conditions (e.g., scenarios, sites, and datasets) to aid UE-side transparent model operations (without model identification) at the Functionality level

§  FFS: Other aspects that may constitute Functionality

o   FFS: which aspects should be specified as conditions of a Feature/FG available for functionality will be discussed in each sub-use-case agenda.

·        For AI/ML model identification and model-ID-based LCM of UE-side models and/or UE-part of two-sided models:

o   model-ID-based LCM operates based on identified models, where a model may be associated with specific configurations/conditions associated with UE capability of an AI/ML-enabled Feature/FG and additional conditions (e.g., scenarios, sites, and datasets) as determined/identified between UE-side and NW-side.

o   FFS: Which aspects should be considered as additional conditions, and how to include them into model description information during model identification will be discussed in each sub-use-case agenda.

o   FFS: Relationship between functionality and model, e.g., whether a model may be identified referring to functionality(s).

o   FFS: relationship between functionality-based LCM and model-ID-based LCM

·        Note: Applicability of functionality-based LCM and model-ID-based LCM is a separate discussion.

 

R1-2304051        Summary#3 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From April 25th GTW session

Conclusion

From RAN1 perspective, it is clarified that an AI/ML model identified by a model ID may be logical, and how it maps to physical AI/ML model(s) may be up to implementation.

·        When distinction is necessary for discussion purposes, companies may use the term a logical AI/ML model to refer to a model that is identified and assigned a model ID, and physical AI/ML model(s) to refer to an actual implementation of such a model.

 

R1-2304052        Summary#4 of General Aspects of AI/ML Framework        Moderator (Qualcomm)

From April 26th GTW session

Agreement

·        Study necessity, mechanisms, after functionality identification, for UE to report updates on applicable functionality(es) among [configured/identified] functionality(es), where the applicable functionalities may be a subset of all [configured/identified] functionalities.

·        Study necessity, mechanisms, after model identification, for UE to report updates on applicable UE part/UE-side model(s), where the applicable models may be a subset of all identified models.

 

Decision: As per email decision posted on April 28th,

Working Assumption

The definition of ‘AI/ML model transfer’ is revised (marked in red) as follows:

AI/ML model transfer

Delivery of an AI/ML model over the air interface in a manner that is not transparent to 3GPP signaling, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model.

 

Working Assumption

Model selection

The process of selecting an AI/ML model for activation among multiple models for the same AI/ML enabled feature.

Note: Model selection may or may not be carried out simultaneously with model activation

 

 

Final summary in R1-2304054.

9.2.2        AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2302319         Discussion and evaluation of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2302358         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2302437         Evaluation on AI CSI feedback enhancement              ZTE

R1-2302477         Evaluation on AI/ML for CSI feedback enhancement vivo

R1-2302540         Evaluation methodology and results on AI/ML for CSI feedback enhancement               OPPO

R1-2302593         Discussion on evaluation on AIML for CSI feedback enhancement        Spreadtrum Communications, BUPT

R1-2302628         Evaluation of ML for CSI feedback enhancement       Nokia, Nokia Shanghai Bell

R1-2302637         Evaluation of AI/ML based methods for CSI feedback enhancement      Fraunhofer IIS

R1-2302695         Evaluation on AI/ML-based CSI feedback enhancement           CATT

R1-2302790         Evaluation for CSI feedback enhancements  Intel Corporation

R1-2302822         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2302904         Evaluation on AI/ML for CSI feedback enhancement Fujitsu

R1-2302918         Evaluations of AI-CSI       Ericsson

R1-2302975         Discussion on evaluation on AI/ML for CSI feedback enhancement       xiaomi

R1-2303050         On Evaluation of AI/ML based CSI Google

R1-2303076         Evaluation on AI/ML for CSI feedback enhancement LG Electronics

R1-2303087         Evaluation on AI for CSI feedback enhancement       Mavenir

R1-2303120         Evaluation on AI ML for CSI feedback enhancement Samsung

R1-2303174         Evaluation of AI and ML for CSI feedback enhancement         Comba

R1-2303183         Some discussions on evaluation on AI-ML for CSI feedback   CAICT

R1-2303194         Evaluation on AI/ML for CSI feedback enhancement ETRI

R1-2303224         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2303336         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

R1-2303435         Evaluation of AI and ML for CSI feedback enhancement         NVIDIA

R1-2303475         Evaluation for AI/ML based CSI feedback enhancement          Apple

R1-2303524         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2303582         Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated

R1-2303654         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2303705         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2303776         Evaluation on AI/ML for CSI feedback enhancement Indian Institute of Tech (H)

 

[112bis-e-R18-AI/ML-02] – Yuan (Huawei)

Email discussion on evaluation on CSI feedback enhancement by April 26th

-        Check points: April 21, April 26

R1-2303988        Summary#1 for [112bis-e-R18-AIML-02] Moderator (Huawei)

From April 18th GTW session

Agreement

For the rank >1 options under AI/ML-based CSI compression, for a given configured Max rank=K, the complexity of FLOPs is reported as the maximum FLOPs over all ranks each includes the summation of FLOPs for inference per layer if applicable, e.g.,

·        Option 1-1 (rank specific): Max FLOPs over K rank specific models.

·        Option 1-2 (rank common): FLOPs of the rank common model.

·        Option 2-1 (layer specific and rank common): Sum of the FLOPs of K models (for the rank=K).

·        Option 2-2 (layer specific and rank specific): Max of the FLOPs over K ranks, k=1,…K, each with a sum of k models.

·        Option 3-1 (layer common and rank common): K * FLOPs of the common model.

·        Option 3-2 (layer common and rank specific): Max of the FLOPs over K ranks, k=1,…K, each with k * FLOPs of the layer common model.
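The six reporting options above reduce to simple max/sum formulas over per-model FLOP counts. The following is a minimal Python sketch of those formulas; the function name and the per-model FLOP numbers are illustrative placeholders, not part of the agreement.

```python
# Illustrative sketch of the agreed FLOPs reporting options for rank > 1
# AI/ML-based CSI compression, with configured Max rank = K.
# `flops` holds per-model inference FLOPs: one entry per rank-specific
# (or layer-specific) model, or a single entry for a common model.

def reported_flops(option: str, K: int, flops: list[float]) -> float:
    if option == "1-1":  # rank specific: max over K rank-specific models
        return max(flops[:K])
    if option == "1-2":  # rank common: FLOPs of the single common model
        return flops[0]
    if option == "2-1":  # layer specific, rank common: sum over K layer models
        return sum(flops[:K])
    if option == "2-2":  # layer specific, rank specific: max over ranks k
        return max(sum(flops[:k]) for k in range(1, K + 1))  # sum of k models
    if option == "3-1":  # layer common, rank common: K * common-model FLOPs
        return K * flops[0]
    if option == "3-2":  # layer common, rank specific: max over ranks k of
        return max(k * flops[0] for k in range(1, K + 1))  # k * common FLOPs
    raise ValueError(option)

per_model = [10.0, 12.0, 11.0, 13.0]  # placeholder FLOP counts (e.g., GFLOPs)
print(reported_flops("2-2", 4, per_model))  # 46.0: sum of all four models
```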

Agreement

For the rank >1 options under AI/ML-based CSI compression, the storage of memory storage/number of parameters is reported as the summation of memory storage/number of parameters over all models potentially used for any layer/rank, e.g.,

·        Option 1-1 (rank specific)/Option 3-2 (layer common and rank specific): Sum of memory storage/number of parameters over all rank specific models.

·        Option 1-2 (rank common): A single memory storage/number of parameters for the rank common model.

·        Option 2-1 (layer specific and rank common): Sum of memory storage/number of parameters over all layer specific models.

·        Option 2-2 (layer specific and rank specific): Sum of memory storage/number of parameters for the specific models over all ranks and all layers in per rank.

·        Option 3-1 (layer common and rank common): A single memory storage/number of parameters for the common model.
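Unlike the FLOPs metric, the storage metric above is a total over every model potentially stored, with only the rank-common (1-2) and fully common (3-1) options reporting a single model. A minimal sketch, with illustrative names and placeholder parameter counts:

```python
# Sketch of the agreed memory storage / parameter-count reporting: a single
# figure for a common model, otherwise the sum over all stored models.

def reported_params(option: str, n_models: int, params: list[int]) -> int:
    if option in ("1-2", "3-1"):  # rank/layer common: one model stored
        return params[0]
    # Options 1-1, 2-1, 2-2, 3-2: sum over all specific models stored
    return sum(params[:n_models])

print(reported_params("1-1", 4, [2_000_000, 2_100_000, 2_050_000, 2_200_000]))
```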

 

R1-2303989        Summary#2 for [112bis-e-R18-AIML-02] Moderator (Huawei)

From April 20th GTW session

Working assumption

For the forms of the intermediate KPI results for the following templates:

Table 2. Evaluation results for CSI compression with model generalization

Table 3. Evaluation results for CSI compression with model scalability,

Table 4. Evaluation results for CSI compression of multi-vendor joint training without model generalization/scalability,

Table 5. Evaluation results for CSI compression of separate training without model generalization/scalability,

Table 7. Evaluation results for CSI prediction with model generalization

·        The intermediate KPI results are in forms of absolute values and the gain over benchmark, e.g., in terms of “absolute value (gain over benchmark)”

·        The intermediate KPI results are in forms of linear value for SGCS and dB value for NMSE

Working Assumption

For the per-layer CSI payload size X/Y/Z in the templates of CSI compression, as a clarification, the X/Y/Z ranges in the working assumption reached at the RAN1#112 meeting are applicable to Max rank = 1/2. For Max rank = 3/4, the per-layer X/Y/Z ranges are re-determined as:

·      X is <=bits

·      Y is bits-bits

·      Z is >=bits

Working Assumption

For the template of Table 1. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, the CSI feedback reduction is provided for 3 CSI feedback overhead ranges, where for each CSI feedback overhead range of the benchmark, it is calculated as the gap between the CSI feedback overhead of benchmark and the CSI feedback overhead of AI/ML corresponding to the same mean UPT.

·        Note: the CSI feedback overhead reduction and gain for mean/5%tile UPT are determined at the same payload size for benchmark scheme

CSI feedback reduction (%) (for a given CSI feedback overhead in the benchmark scheme):

·        RU <= 39%: [X*Max rank value], [Y*Max rank value], [Z*Max rank value]

·        RU 40%-69%: [X*Max rank value], [Y*Max rank value], [Z*Max rank value]

·        RU >= 70%: [X*Max rank value], [Y*Max rank value], [Z*Max rank value]

 

Note: for result collection for the generalization verification of AI/ML based CSI compression over various deployment scenarios, till the RAN1#112bis-e meeting,

 

Agreement

For the AI/ML based CSI prediction, add an entry for “Table 6. Evaluation results for CSI prediction without model generalization/scalability” to report the Codebook type for CSI report.

Assumption:

·        UE speed

·        CSI feedback periodicity

·        Observation window (number/distance)

·        Prediction window (number/distance [between prediction instances/distance from the last observation instance to the 1st prediction instance])

·        Whether/how to adopt spatial consistency

·        Codebook type for CSI report

 

 

R1-2303990         Summary#3 for [112bis-e-R18-AIML-02]    Moderator (Huawei)

R1-2303991        Summary#4 for [112bis-e-R18-AIML-02] Moderator (Huawei)

From April 24th GTW session

Agreement

To evaluate the performance of the intermediate KPI based monitoring mechanism for CSI compression, the model monitoring methodology is considered as:

 

Agreement

To evaluate the performance of the intermediate KPI based monitoring mechanism for CSI compression, for Step2 of the model monitoring methodology, the per sample  is considered for

 

 

R1-2303992         Summary#5 for [112bis-e-R18-AIML-02]    Moderator (Huawei)

R1-2303993        Summary#6 for [112bis-e-R18-AIML-02] Moderator (Huawei)

Decision: As per email decision posted on April 26th,

Conclusion

For the evaluation of CSI enhancements, when reporting the computational complexity including the pre-processing and post-processing, the complexity metric of FLOPs may be reported separately for the AI/ML model and the pre/post processing.

·        How to calculate the FLOPs for pre/post processing is up to companies.

·        While reporting the FLOPs of pre-processing and post-processing, the following boundaries are considered.

o   Estimated raw channel matrix per each frequency unit as an input for pre-processing of the CSI generation part

o   Precoding vectors per each frequency unit as an output of post-processing of the CSI reconstruction part

Agreement

For the evaluation of CSI compression, companies are allowed to report (by introducing an additional field in the template to describe) the specific CQI determination method(s) for AI/ML, e.g.,

·        Option 2a: CQI is calculated based on CSI reconstruction output, if CSI reconstruction model is available at the UE and UE can perform reconstruction model inference with potential adjustment

o   Option 2a-1: The CSI reconstruction part for CQI calculation at the UE same as the actual CSI reconstruction part at the NW

o   Option 2a-2: The CSI reconstruction part for CQI calculation at the UE is a proxy model, which is different from the actual CSI reconstruction part at the NW

·        Option 2b: CQI is calculated using two stage approach, UE derives CQI using precoded CSI-RS transmitted with a reconstructed precoder

·        Option 1a: CQI is calculated based on the target CSI from the realistic channel estimation

·        Option 1b: CQI is calculated based on the target CSI from the realistic channel estimation and potential adjustment

·        Option 1c: CQI is calculated based on traditional codebook

·        Other options if adopted, to be described by companies

Agreement

For the AI/ML based CSI prediction sub use case, if collaboration level x is reported as the benchmark, the EVM to distinguish level x and level y/z based AI/ML CSI prediction is considered from the generalization aspect.

·        E.g., collaboration level y/z based CSI prediction is modeled as the fine-tuning case or generalization Case 1, while collaboration level x based CSI prediction is modeled as generalization Case 2 or Case 3.

 

From April 26th GTW session

Agreement

To evaluate the performance of the intermediate KPI based monitoring mechanism for CSI compression,  is in forms of

 

Working Assumption

For the template of Table 1. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, the CSI feedback overhead for the metric of eventual KPI (e.g., mean/5% UPT) is re-determined as:

·         CSI feedback overhead A: <= β*80 bits.

·         CSI feedback overhead B: β*(100 bits – 140 bits).

·         CSI feedback overhead C: >= β*230 bits.

·         Note: β = 1 for max rank = 1, and β = 1.5 for max rank = 2/3/4.

·         FFS for rank 2/3/4, whether to add an additional CSI feedback overhead D: >= γ*230 bits, γ = [1.9], and limit the range of CSI feedback overhead C as: β*230 bits – γ*230 bits.

·         Note: companies additionally report the exact CSI feedback overhead they considered
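The re-determined overhead ranges above amount to scaling three fixed bit thresholds by β. A minimal sketch of that classification (function name and sample payloads are illustrative; thresholds and β follow the working assumption, without the FFS range D):

```python
# Classify a CSI feedback payload into overhead range A/B/C per the
# working assumption: A <= beta*80, B in beta*(100..140), C >= beta*230,
# with beta = 1 for max rank 1 and beta = 1.5 for max rank 2/3/4.

def overhead_range(bits: int, max_rank: int) -> str:
    beta = 1.0 if max_rank == 1 else 1.5
    if bits <= beta * 80:
        return "A"
    if beta * 100 <= bits <= beta * 140:
        return "B"
    if bits >= beta * 230:
        return "C"
    return "outside A/B/C"  # payloads between the ranges are not classified

print(overhead_range(60, 1))   # A: 60 <= 80
print(overhead_range(180, 2))  # B: 150 <= 180 <= 210
print(overhead_range(400, 4))  # C: 400 >= 345
```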

 

Observation

For the scalability verification of AI/ML based CSI compression over various CSI payload sizes, till the RAN1#112bis-e meeting, compared to the generalization Case 1 where the AI/ML model is trained with dataset subject to a certain CSI payload size#B and applied for inference with a same CSI payload size#B,

 

Observation

For the AI/ML based CSI prediction, till the RAN1#112bis-e meeting,

 

Agreement

For the AI/ML based CSI compression, for the submission of simulation results to the RAN1#113 meeting, for Table 1. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, companies are encouraged to take the following assumptions as baseline for the calibration purpose:

 

Agreement

For the AI/ML based CSI prediction, for the submission of simulation results to the RAN1#113 meeting,

 

 

Final summary in R1-2304247.

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including potential specification impact.

 

R1-2302320         Discussion on other aspects of AI/ML for CSI feedback enhancement               FUTUREWEI

R1-2302359         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2302438         Discussion on other aspects for AI CSI feedback enhancement ZTE

R1-2302478         Other aspects on AI/ML for CSI feedback enhancement           vivo

R1-2302541         On sub use cases and other aspects of AI/ML for CSI feedback enhancement               OPPO

R1-2302594         Discussion on other aspects on AIML for CSI feedback            Spreadtrum Communications

R1-2302629         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2302696         Discussion on AI/ML-based CSI feedback enhancement          CATT

R1-2302750         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2302791         On other aspects on AI/ML for CSI feedback              Intel Corporation

R1-2302823         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2302842         Considerations on CSI measurement enhancements via AI/ML Sony

R1-2302905         Views on specification impact for CSI feedback enhancement Fujitsu

R1-2302919         Discussion on AI-CSI        Ericsson

R1-2302976         Discussion on specification impact for CSI feedback based on AI/ML   Xiaomi

R1-2303026         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2303038         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2303051         On Enhancement of AI/ML based CSI           Google

R1-2303077         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2303121         Discussion on potential specification impact for CSI feedback enhancement               Samsung

R1-2303184         Discussions on AI-ML for CSI feedback       CAICT

R1-2303195         Discussion on other aspects on AI/ML for CSI feedback enhancement  ETRI

R1-2303225         Discussion on other aspects on AI/ML for CSI feedback enhancement  CMCC

R1-2303337         Other aspects on AI/ML for CSI feedback enhancement           MediaTek Inc.

R1-2303436         AI and ML for CSI feedback enhancement   NVIDIA

R1-2303476         Discussion on other aspects of AI/ML for CSI enhancement    Apple

R1-2303525         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2303583         Other aspects on AI/ML for CSI feedback enhancement           Qualcomm Incorporated

R1-2303655         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2303706         Discussion on other aspects on AI/ML for CSI feedback enhancement  NTT DOCOMO, INC.

R1-2303810         Discussions on CSI measurement enhancement for AI/ML communication          TCL Communication Ltd.

 

[112bis-e-R18-AI/ML-03] – Huaning (Apple)

Email discussion on other aspects on AI/ML for CSI feedback enhancement by April 26th

-        Check points: April 21, April 26

R1-2303979        Summary #1 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From April 18th GTW session

Agreement

The study of AI/ML based CSI compression should be based on the legacy CSI feedback signaling framework. Further study potential specification enhancement on

·        CSI-RS configurations (No discussion on CSI-RS pattern design enhancements)

·        CSI reporting configurations

·        CSI report UCI mapping/priority/omission

·        CSI processing procedures.

·        Other aspects are not precluded.

Agreement

In CSI compression using two-sided model use case, for UE-side monitoring, further study potential specification impact on triggering and means for reporting the monitoring metrics, including periodic/semi-persistent and aperiodic reporting, and other reporting initiated from UE.

 

 

R1-2303980        Summary #2 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From April 20th GTW session

Agreement

In CSI prediction using UE-side model use case, whether to address the potential spec impact of CSI prediction depends on RAN#100 final conclusion, focusing on the following

 

 

R1-2303981        Summary #3 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From April 24th GTW session

Agreement

In CSI compression using two-sided model use case, for NW-side monitoring, further study the necessity, feasibility and potential specification impact to enable performance monitoring using an existing CSI feedback scheme as the reference.

 

 

R1-2303982        Summary #4 on other aspects of AI/ML for CSI enhancement          Moderator (Apple)

From April 26th GTW session

Conclusion

In CSI compression using two-sided model use case, gradient-exchange based sequential training over the air interface is deprioritized in R18 SI.

 

Agreement

In CSI compression using two-sided model use case, further study the necessity and potential specification impact of the following aspects related to the ground truth CSI format for NW side data collection for model training:   

·        Scalar quantization for ground-truth CSI

o   FFS: any processing applied to the ground-truth CSI before scalar quantization, based on evaluation results in 9.2.2.1

·        Codebook-based quantization for ground-truth CSI

o   FFS: Parameter set enhancement of existing eType II codebook, based on evaluation results in 9.2.2.1

·        Number of layers for which the ground-truth data is collected, and whether UE or NW determines the number of layers for ground-truth CSI data collection.

Agreement

In CSI compression using two-sided model use case, further study the necessity and potential specification impact on quantization alignment, including at least:

·        For vector quantization scheme,

o   The format and size of the VQ codebook

o   Size and segmentation method of the CSI generation model output

·        For scalar quantization scheme,

o   Uniform and non-uniform quantization

o   The format, e.g., quantization granularity, the distribution of bits assigned to each float.

·        Quantization alignment using a 3GPP-aware mechanism.

 

Final summary in R1-2303983.

9.2.3        AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2302321         Discussion and evaluation of AI/ML for beam management     FUTUREWEI

R1-2302360         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2302439         Evaluation on AI beam management             ZTE

R1-2302479         Evaluation on AI/ML for beam management vivo

R1-2302542         Evaluation methodology and results on AI/ML for beam management   OPPO

R1-2302595         Evaluation on AI/ML for beam management Spreadtrum Communications

R1-2302630         Evaluation of ML for beam management      Nokia, Nokia Shanghai Bell

R1-2302697         Evaluation on AI/ML-based beam management          CATT

R1-2302792         Evaluations for AI/ML beam management   Intel Corporation

R1-2302825         Discussion for evaluation on AI/ML for beam management     InterDigital, Inc.

R1-2302878         Evaluation of AIML for beam management  Ericsson

R1-2302906         Evaluation on AI/ML for beam management Fujitsu

R1-2302977         Evaluation on AI/ML for beam management xiaomi

R1-2303052         On Evaluation of AI/ML based Beam Management    Google

R1-2303078         Evaluation on AI/ML for beam management LG Electronics

R1-2303122         Evaluation on AI ML for Beam management              Samsung

R1-2303185         Some discussions on evaluation on AI-ML for Beam management         CAICT

R1-2303226         Discussion on evaluation on AI/ML for beam management      CMCC

R1-2303301         Evaluation on AI/ML for beam management CEWiT

R1-2303338         Evaluation on AI/ML for beam management MediaTek Inc.

R1-2303437         Evaluation of AI and ML for beam management         NVIDIA

R1-2303477         Evaluation for AI/ML based beam management enhancements Apple

R1-2303526         Evaluation on AI/ML for beam management Lenovo

R1-2303584         Evaluation on AI/ML for beam management Qualcomm Incorporated

R1-2303707         Discussion on evaluation on AI/ML for beam management      NTT DOCOMO, INC.

 

[112bis-e-R18-AI/ML-04] – Feifei (Samsung)

Email discussion on evaluation on AI/ML for beam management by April 26th - extended to April 28th

-        Check points: April 21, April 26

R1-2303994        Feature lead summary #0 evaluation of AI/ML for beam management               Moderator (Samsung)

From April 18th GTW session

Agreement

Agreement

 

 

R1-2303995        Feature lead summary #1 evaluation of AI/ML for beam management               Moderator (Samsung)

From April 20th GTW session

Conclusion

 

Agreement

At least for evaluation on the performance of DL Tx beam prediction, consider the following options for Rx beam for providing input for AI/ML model for training and/or inference if applicable

Other options are not precluded and can be reported by companies.

 

Observation

·        At least for BM-Case1 for inference of DL Tx beam with L1-RSRPs of all beams in Set B, existing quantization granularity of L1-RSRP (i.e., 1dB for the best beam, 2dB for the difference to the best beam) causes [a minor loss x%~y%, if applicable] in beam prediction accuracy compared to unquantized L1-RSRPs of beams in Set B.
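The "existing quantization granularity" referenced in the observation can be sketched roughly as follows. This is an illustrative approximation only: it models just the 1 dB step for the best beam and the 2 dB step for the differentials, and omits the actual reporting ranges and bit-widths defined in TS 38.133.

```python
# Rough sketch of legacy L1-RSRP quantization: the best beam is reported
# on a 1 dB grid, other beams as 2 dB-step differences from the best.

def quantize_rsrps(rsrps_dbm: list[float]) -> list[int]:
    best = max(rsrps_dbm)
    q_best = round(best)  # 1 dB granularity for the best beam
    out = []
    for r in rsrps_dbm:
        if r == best:
            out.append(q_best)
        else:
            diff = best - r               # non-negative differential
            out.append(q_best - 2 * round(diff / 2))  # 2 dB granularity
    return out

print(quantize_rsrps([-71.4, -75.3, -80.9]))  # [-71, -75, -81]
```

Comparing beam-prediction accuracy with such quantized inputs against unquantized L1-RSRPs is what yields the x%~y% loss figure in the observation.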

 

R1-2303996        Feature lead summary #2 evaluation of AI/ML for beam management               Moderator (Samsung)

From April 24th GTW session

Agreement

 

 

R1-2303997        Feature lead summary #3 evaluation of AI/ML for beam management               Moderator (Samsung)

From April 26th GTW session

Observation

 

Agreement

For performance evaluation of AI/ML based DL Tx beam prediction for BM-Case1 and BM-Case2, optionally study the performance with a quasi-optimal Rx beam (i.e., not all the measurements as inputs of AI/ML are from the “best” Rx beam) with less measurement/RS overhead compared to exhaustive Rx beam sweeping.

o   Opt A: Identify the quasi-optimal Rx beams to be utilized for measuring Set B/Set C based on the previous measurements.

§  Companies can report the time information and beam type (e.g., whether the same Tx beam(s) in Set B) of the reference signal to use.

§  Companies report how to find the quasi-optimal Rx beam with “previous measurement”

o   FFS: Opt B: The Rx beams for measuring Set B/Set C consist of the X% of “best” Rx beam exhaustive Rx beam sweeping and (1-X%) of random Rx beams [or the adjacent Rx beam to the “best” Rx beam].

§  X%= 80% or 90%, or other values reported by companies.

§  Note: X% is the percentage of measurements with “best” Rx beams out of all measurements  

o   Other options are not precluded.

·        Companies report the measurement/RS overhead together with beam prediction accuracy.

 

Conclusion

To evaluate the performance of BM-Case1 for both DL Tx beam and pair prediction, aiming to analyze the following aspects:

 

 

Decision: As per email decision posted on April 28th,

Observation

 

Conclusion

To evaluate the performance of BM-Case2 for both DL Tx beam and pair prediction, aiming to analyze the following aspects:

9.2.3.2       Other aspects on AI/ML for beam management

Including potential specification impact.

 

R1-2302322         Discussion on other aspects of AI/ML for beam management  FUTUREWEI

R1-2302361         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2302432         Discussion on other aspects of AI/ML beam management        New H3C Technologies Co., Ltd.

R1-2302440         Discussion on other aspects for AI beam management              ZTE

R1-2302480         Other aspects on AI/ML for beam management          vivo

R1-2302543         Other aspects of AI/ML for beam management           OPPO

R1-2302596         Other aspects on AI/ML for beam management          Spreadtrum Communications

R1-2302631         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2302698         Discussion on AI/ML-based beam management          CATT

R1-2302793         Other aspects on AI/ML for beam management          Intel Corporation

R1-2302826         Discussion for other aspects on AI/ML for beam management InterDigital, Inc.

R1-2302843         Consideration on AI/ML for beam management          Sony

R1-2302868         Discussion on AI/ML for beam management Panasonic

R1-2302883         Discussion on AI/ML for beam management Ericsson

R1-2302907         Discussion for specification impacts on AI/ML for beam management  Fujitsu

R1-2302978         Potential specification impact on AI/ML for beam management             xiaomi

R1-2303053         On Enhancement of AI/ML based Beam Management              Google

R1-2303079         Other aspects on AI/ML for beam management          LG Electronics

R1-2303123         Discussion on potential specification impact for beam management       Samsung

R1-2303186         Discussions on AI-ML for Beam management            CAICT

R1-2303196         Discussion on other aspects on AI/ML for beam management  ETRI

R1-2303227         Discussion on other aspects on AI/ML for beam management  CMCC

R1-2303339         Other aspects on AI/ML for beam management          MediaTek Inc.

R1-2303438         AI and ML for beam management  NVIDIA

R1-2303478         Discussion on other aspects of AI/ML for beam management enhancement               Apple

R1-2303527         Further aspects of AI/ML for beam management        Lenovo

R1-2303585         Other aspects on AI/ML for beam management          Qualcomm Incorporated

R1-2303669         Discussion on AI/ML for beam management NEC

R1-2303708         Discussion on other aspects on AI/ML for beam management  NTT DOCOMO, INC.

 

[112bis-e-R18-AI/ML-05] – Zhihua (OPPO)

Email discussion on other aspects of AI/ML for beam management by April 26th

-        Check points: April 21, April 26

R1-2303966        Summary#1 for other aspects on AI/ML for beam management       Moderator (OPPO)

From April 18th GTW session

Agreement

Regarding the data collection at UE side for UE-side AI/ML model, study the potential specification impact of UE reporting to network from the following aspect

·        Supported/preferred configurations of DL RS transmission

·        Other aspect(s) is not precluded

 

R1-2303967        Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

From April 20th GTW session

Agreement

Regarding the data collection at UE side for UE-side AI/ML model, study the potential specification impact (if any) to initiate/trigger data collection from RAN1 point of view by considering the following options as a starting point

 

 

R1-2303968        Summary#3 for other aspects on AI/ML for beam management       Moderator (OPPO)

Presented in April 24th GTW session.

 

R1-2303969        Summary#4 for other aspects on AI/ML for beam management       Moderator (OPPO)

From April 26th GTW session

Agreement

Regarding data collection for NW-side AI/ML model, study the following options (including the combination of options) for the contents of collected data,

 

Agreement

Regarding data collection for NW-side AI/ML model, study necessity, benefits and beam-management-specific potential specification impact from RAN1 point of view on the following additional aspects

 

Decision: As per email decision posted on April 28th,

Agreement

For AI/ML performance monitoring for BM-Case1 and BM-Case2, study potential specification impact of at least the following alternatives as the benchmark/reference (if applicable) for performance comparison:

·        Alt.1: The best beam(s) obtained by measuring beams of a set indicated by gNB (e.g., Beams from Set A)

o   FFS: gNB configures one or multiple sets for one or multiple benchmarks/references

·        Alt.4: Measurements of the predicted best beam(s) corresponding to model output (e.g., Comparison between actual L1-RSRP and predicted RSRP of predicted Top-1/K Beams)

·        FFS:

o   Alt.3: The beam corresponding to some or all the indicated/activated TCI state(s)   

·        Other alternative is not precluded. 

 

 

Final summary in R1-2303970.

9.2.4        AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2302335         Evaluation of AI/ML for Positioning Accuracy Enhancement  Ericsson

R1-2302362         Evaluation on AI/ML for positioning accuracy enhancement    Huawei, HiSilicon

R1-2302441         Evaluation on AI positioning enhancement   ZTE

R1-2302481         Evaluation on AI/ML for positioning accuracy enhancement    vivo

R1-2302544         Evaluation methodology and results on AI/ML for positioning accuracy enhancement               OPPO

R1-2302632         Evaluation of ML for positioning accuracy enhancement          Nokia, Nokia Shanghai Bell

R1-2302699         Evaluation on AI/ML-based positioning enhancement CATT

R1-2302908         Discussions on evaluation results of AIML positioning accuracy enhancement               Fujitsu

R1-2302979         Evaluation on AI/ML for positioning accuracy enhancement    xiaomi

R1-2303054         On Evaluation of AI/ML based Positioning  Google

R1-2303080         Evaluation on AI/ML for positioning accuracy enhancement    LG Electronics

R1-2303124         Evaluation on AI ML for Positioning            Samsung

R1-2303187         Some discussions on evaluation on AI-ML for positioning accuracy enhancement               CAICT

R1-2303228         Discussion on evaluation on AI/ML for positioning accuracy enhancement               CMCC

R1-2303340         Evaluation of AIML for Positioning Accuracy Enhancement   MediaTek Inc.

R1-2303439         Evaluation of AI and ML for positioning enhancement             NVIDIA

R1-2303450         Evaluation on AI/ML for positioning accuracy enhancement    InterDigital, Inc.

R1-2303926         Evaluation on AI/ML for positioning accuracy enhancement    Apple    (rev of R1-2303479)

R1-2303528         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2303586         Evaluation on AI/ML for positioning accuracy enhancement    Qualcomm Incorporated

 

[112bis-e-R18-AI/ML-06] – Yufei (Ericsson)

Email discussion on evaluation on AI/ML for positioning accuracy enhancement by April 26th - extended till April 28th

-        Check points: April 21, April 26

R1-2304016         Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2304017        Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From April 18th GTW session

Agreement

For evaluation of both the direct AI/ML positioning and AI/ML assisted positioning, company optionally adopt delay profile (DP) as a type of information for model input.

·        DP is a degenerated version of PDP, where the path power is not provided.
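To make the PDP/DP relationship concrete, a minimal Python sketch (hypothetical helper names; the −30 dB tap-power threshold is an arbitrary choice for the example, not an agreed parameter) deriving a PDP from a CIR and then degenerating it into a DP by dropping the per-path power:

```python
import numpy as np

def cir_to_pdp(cir, threshold_db=-30.0):
    """Power delay profile: (tap index, tap power) for taps whose power
    exceeds a threshold relative to the strongest tap."""
    power = np.abs(cir) ** 2
    mask = power > power.max() * 10 ** (threshold_db / 10)
    taps = np.nonzero(mask)[0]
    return list(zip(taps.tolist(), power[mask].tolist()))

def pdp_to_dp(pdp):
    """Delay profile: drop the per-path power, keep only the delays."""
    return [delay for delay, _power in pdp]

# Toy 16-tap CIR with three paths at tap delays 2, 5, 9.
cir = np.zeros(16, dtype=complex)
cir[[2, 5, 9]] = [1.0, 0.5 + 0.5j, 0.1j]
pdp = cir_to_pdp(cir)
dp = pdp_to_dp(pdp)
```

Here the DP retains only the tap delays, i.e. exactly the PDP with its power column removed, which is the sense in which DP is a degenerated version of PDP.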

Agreement

For the evaluation of AI/ML based positioning, the study of model input due to different number of TRPs include the following approaches. Proponent of each approach provide analysis for model performance, signaling overhead (including training data collection and model inference), model complexity and computational complexity.

 

Agreement

In the evaluation of AI/ML based positioning, if N′TRP < 18, the set of N′TRP TRPs that provide measurements to the model input of an AI/ML model are reported using the TRP indices shown below.

(Figure: TRP index table — image not reproduced)

 

R1-2304018         Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2304019        Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From April 20th GTW session

Agreement

For AI/ML assisted positioning with TOA as model output, study the impact of labelling error on TOA accuracy and/or positioning accuracy.

o    Value L is up to sources.

 

Agreement

For AI/ML assisted positioning with LOS/NLOS indicator as model output, study the impact of labelling error on LOS/NLOS indicator accuracy and/or positioning accuracy.

o    Value m and n are up to sources.

 

 

R1-2304103        Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From April 24th GTW session

Agreement

For the evaluation of AI/ML based positioning method, the measurement size and signalling overhead for the model input is reported.

 

Observation

For AI/ML based positioning method, companies have submitted evaluation results to show that for their evaluated cases, for a given company’s model design, a lower complexity (model complexity and computational complexity) model can still achieve acceptable positioning accuracy (e.g., <1m), albeit degraded, when compared to a higher complexity model.

Note: For easy reference, sources include CMCC (R1-2303228), InterDigital (R1-2303450), Ericsson (R1-2302335), Huawei/HiSilicon (R1-2302362), CATT (R1-2302699), Nokia (R1-2302632).

 

Observation

For direct AI/ML positioning, for L in the range of 0.25m to 5m, the positioning error increases approximately in proportion to L, where L (in meters) is the standard deviation of truncated Gaussian Distribution of the ground truth label error.
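For reference, the ground-truth label-error model behind this observation can be sketched as follows (a hedged illustration: the ±2L truncation range and the rejection-sampling implementation are assumptions made for this example, not agreed parameters):

```python
import numpy as np

def label_error(n, L, trunc=2.0, rng=None):
    """Draw n 1-D ground-truth label errors (metres) from a zero-mean
    Gaussian with standard deviation L, truncated to [-trunc*L, trunc*L]
    by redrawing out-of-range samples (rejection sampling)."""
    rng = np.random.default_rng(rng)
    out = np.empty(n)
    filled = 0
    while filled < n:
        draw = rng.normal(0.0, L, size=n - filled)
        keep = draw[np.abs(draw) <= trunc * L]
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out

# L = 1 m, the middle of the 0.25 m .. 5 m range studied above.
err = label_error(10000, L=1.0, rng=0)
```

Sweeping L over 0.25 m to 5 m with such a generator is how the proportional growth of positioning error with L would be reproduced.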

 

 

R1-2304104         Summary #6 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

R1-2304105         Summary #7 of Evaluation on AI/ML for positioning accuracy enhancement               Moderator (Ericsson)

From April 26th GTW session

Observation

For AI/ML assisted positioning, evaluation results have been provided by sources for label-based model monitoring methods. With TOA and/or LOS/NLOS indicator as model output, the estimated ground truth label (i.e., TOA and/or LOS/NLOS indicator) is provided by the location estimation from the associated conventional positioning method. The associated conventional positioning method refers to the method which utilizes the AI/ML model output to determine target UE location.

Note: Sources include vivo (R1-2302481), MediaTek (R1-2303340), Ericsson (R1-2302335)

 

Observation

For both direct AI/ML and AI/ML assisted positioning, evaluation results have been provided by sources to demonstrate the feasibility of label-free model monitoring methods.

Note: Sources include vivo (R1-2302481), CATT (R1-2302699), MediaTek (R1-2303340), Ericsson (R1-2302335), Nokia (R1-2302632).

 

 

Decision: As per email decision posted on April 28th,

Observation

For both direct AI/ML and AI/ML assisted positioning, evaluation results submitted to RAN1#112bis show that with CIR model input for a trained model,

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.

 

Observation

For direct AI/ML positioning, based on evaluation results of timing error in the range of 0-50 ns, when the model is trained by a dataset with UE/gNB RX and TX timing error t1 (ns) and tested in a deployment scenario with UE/gNB RX and TX timing error t2 (ns), for a given t1,

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.

 

Observation

For direct AI/ML positioning, based on evaluation results of network synchronization error in the range of 0-50 ns, when the model is trained by a dataset with network synchronization error t1 (ns) and tested in a deployment scenario with network synchronization error t2 (ns), for a given t1,

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.

 

 

Final summary in R1-2304106.

9.2.4.2       Other aspects on AI/ML for positioning accuracy enhancement

Including potential specification impact.

 

R1-2302336         Other Aspects of AI/ML Based Positioning Enhancement        Ericsson

R1-2302363         Discussion on AI/ML for positioning accuracy enhancement   Huawei, HiSilicon

R1-2302442         Discussion on other aspects for AI positioning enhancement    ZTE

R1-2302482         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2302545         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement               OPPO

R1-2302597         Discussion on other aspects on AIML for positioning accuracy enhancement               Spreadtrum Communications

R1-2302633         Other aspects on ML for positioning accuracy enhancement     Nokia, Nokia Shanghai Bell

R1-2302700         Discussion on AI/ML-based positioning enhancement              CATT

R1-2302739         Other aspects on AI-ML for positioning accuracy enhancement              Baicells

R1-2302844         Discussions on AI-ML for positioning accuracy enhancement Sony

R1-2302909         Discussions on specification impacts for AIML positioning accuracy enhancement               Fujitsu

R1-2302980         Views on the other aspects of AI/ML-based positioning accuracy enhancement               xiaomi

R1-2303055         On Enhancement of AI/ML based Positioning             Google

R1-2303081         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2303125         Discussion on potential specification impact for Positioning    Samsung

R1-2303188         Discussions on AI-ML for positioning accuracy enhancement CAICT

R1-2303229         Discussion on other aspects on AI/ML for positioning accuracy enhancement               CMCC

R1-2303341         Other Aspects on AI ML Based Positioning Enhancement        MediaTek Inc.

R1-2303413         On potential AI/ML solutions for positioning              Fraunhofer IIS, Fraunhofer HHI

R1-2303440         AI and ML for positioning enhancement       NVIDIA

R1-2303451         Designs and potential specification impacts of AIML for positioning     InterDigital, Inc.

R1-2303480         On Other aspects on AI/ML for positioning accuracy enhancement        Apple

R1-2303529         AI/ML Positioning use cases and associated Impacts Lenovo

R1-2303587         Other aspects on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2303675         Discussion on AI/ML for positioning accuracy enhancement   NEC

R1-2303709         Discussion on other aspects on AI/ML for positioning accuracy enhancement     NTT DOCOMO, INC.

 

[112bis-e-R18-AI/ML-07] – Huaming (vivo)

Email discussion on other aspects of AI/ML for positioning accuracy enhancement by April 26th

-        Check points: April 21, April 26

R1-2303940        FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From April 18th GTW session

Agreement

Regarding monitoring for AI/ML based positioning, at least the following entities are identified to derive monitoring metric

·        UE at least for Case 1 and 2a (with UE-side model)

·        gNB at least for Case 3a (with gNB-side model)

·        LMF at least for Case 2b and 3b (with LMF-side model)

 

R1-2304056        FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From April 20th GTW session

Working Assumption

Regarding data collection at least for model training for AI/ML based positioning, at least the following information of data with potential specification impact are identified.

 

Agreement

Regarding monitoring for AI/ML based positioning, at least the following aspects are identified for further study on benefit(s), feasibility, necessity and potential specification impact for each case (Case 1 to 3b)

 

 

R1-2304102        FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

Presented in April 24th GTW session.

 

R1-2304177        FL summary #4 of other aspects on AI/ML for positioning accuracy enhancement      Moderator (vivo)

From April 26th GTW session

Agreement

Regarding LCM of AI/ML based positioning accuracy enhancement, at least for Case 1 and Case 2a (model is at UE-side), further study the following aspects on information related to the conditions


 RAN1#113

9.2      Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-221348 for detailed scope of the SI.

 

R1-2306142         Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)

 

[113-R18-AI/ML] – Taesang (Qualcomm)

Email discussion on AI/ML

-        To be used for sharing updates on the online/offline schedule, details on what is to be discussed in online/offline sessions, the tdoc number of the moderator summary for the online session, etc.

 

R1-2305325         Updated TR 38.843 including RAN1 agreements until RAN1#112              Qualcomm Incorporated

R1-2305326         Updated TR 38.843 including RAN1 agreements from RAN1#112bis-e   Qualcomm Incorporated

R1-2306235         TR38.843 v0.1.0: Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface           Rapporteur (Qualcomm) (rev of R1-2306170)

Decision: TR 38.843 v0.1.0 in R1-2306235 is endorsed.

Note: TR 38.843 v0.x.y for incorporating further modifications will be discussed in RAN1 before RAN#101.

9.2.1       General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2304370         Discussion on common AI/ML characteristics and operations              FUTUREWEI

R1-2304418         Discussion on general aspects of AI/ML framework   Continental Automotive Technologies GmbH

R1-2304438         Discussion on general aspects of AI/ML framework   Panasonic

R1-2304470         Discussions on AI/ML framework  vivo

R1-2304533         Discussion on general aspects of common AI PHY framework              ZTE

R1-2304549         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2304652         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2304679         Discussion on general aspects of AI/ML LCM            NYCU, NTPU

R1-2304680         Further discussion on the general aspects of ML for Air-interface              Nokia, Nokia Shanghai Bell

R1-2304721         Discussion on AI/ML general framework     CATT

R1-2304748         Discussion on general aspects of AIML framework    Ericsson

R1-2304763         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2304778         Discussion on general aspects of AI/ML framework   InterDigital, Inc.

R1-2304841         On General Aspects of AI/ML Framework   Google

R1-2304892         Views on the general aspects of AI/ML framework    xiaomi

R1-2304947         General Aspects of AI/ML framework          AT&T

R1-2304991         Discussion on general aspects of AI ML framework   NEC

R1-2305014         Considerations on general aspects on AI-ML framework              CAICT

R1-2305028         Discussion on general aspects of AI/ML framework   KDDI Corporation

R1-2305031         Considerations on common AI/ML framework           Sony

R1-2305084         Discussion on general aspects of AI/ML framework   CMCC

R1-2305159         General aspects of AI and ML framework for NR air interface              NVIDIA

R1-2305174         General aspects of AI/ML framework for NR air interface              Intel Corporation

R1-2305197         General aspects of AI/ML framework           Fraunhofer IIS, Fraunhofer HHI

R1-2305201         Discussion on general aspects of AI/ML framework   Lenovo

R1-2305233         Discussion on general aspect of AI/ML framework     Apple

R1-2305295         General aspects on AI/ML framework           LG Electronics

R1-2305327         General aspects of AI/ML framework           Qualcomm Incorporated

R1-2305458         On general aspects of AI/ML framework      OPPO

R1-2305481         Discussion on AI/ML Model Life Cycle Management Rakuten Mobile, Inc

R1-2305504         General aspects of AI/ML framework and evaluation methodology       Samsung

R1-2305590         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2305690         Considering on system architecture for general AI/ML framework               TCL Communication Ltd.

R1-2305691         Discussions on General Aspects of AI/ML Framework              Indian Institute of Tech (M), IIT Kanpur

R1-2305696         Discussion on general aspects of AI/ML LCM            MediaTek Inc.

R1-2305788         Discussion on general aspects of AI/ML framework for NR air interface ETRI

 

R1-2306048         Summary#1 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Monday session

Agreement

Consider at least the following aspects and, if applicable, the corresponding potential specification impact related to data collection:

 

Agreement

For model identification of UE-side or UE-part of two-sided models, categorize model identification types as follows, and further study relevant aspects, necessity, and specification impact (if any).

 

 

R1-2306049         Summary#2 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Wednesday session

Agreement

For functionality/model-ID based LCM,

·        Once functionalities/models are identified, the same or similar procedures may be used for their activation, deactivation, switching, fallback, and monitoring.

 

R1-2306050         Summary#3 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Thursday session

Agreement

Once models are identified, UE can indicate supported AI/ML model IDs for a given AI/ML-enabled Feature/FG in a UE capability report as a starting point.

·       FFS: applicability to model identification, Type A, type B1 and type B2

o   FFS: Using a procedure other than UE capability report

·       Note: model identification using capability report is not precluded for type B1 and type B2

Agreement

Study how to handle the impact of UE’s internal conditions such as memory, battery, and other hardware limitations on functionality/model operations and AI/ML-enabled Feature.

Note: it does not preclude any existing solutions.

 

 

R1-2306051         Summary#4 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Friday session

Agreement

Revise the following terminologies for model activation, model deactivation, and model switching as follows

Model activation: Enable an AI/ML model for a specific AI/ML-enabled feature

Model deactivation: Disable an AI/ML model for a specific AI/ML-enabled feature

Model switching: Deactivating a currently active AI/ML model and activating a different AI/ML model for a specific AI/ML-enabled feature

 

Agreement

In model delivery/transfer Case z4, the “known model structure” means an exact model structure that has been previously identified between NW and UE and for which the UE has explicitly indicated its support.

In model delivery/transfer Case z5, the “unknown model structure” means any other model structure not covered in z4, including any model structure that is only partially known.

 

Agreement

For the purpose of activation/selection/switching of UE-side models/UE-part of two-sided models /functionalities (if applicable), study necessity, feasibility and potential specification impact for methods to assess/monitor the applicability and expected performance of an inactive model/functionality, including the following examples:

·       Assessment/Monitoring based on the additional conditions associated with the model/functionality

·       Assessment/Monitoring based on input/output data distribution

·       Assessment/Monitoring using the inactive model/functionality for monitoring purpose and measuring the inference accuracy

·       Assessment/Monitoring based on past knowledge of the performance of the same model/functionality (e.g., based on other UEs)

FFS: Requirements for the assessment/monitoring to be reliable (e.g., sufficient data coverage during evaluation)

FFS: Additional aspects specific to the case where the inactive model has never been activated before, if any.

 

 

R1-2306052         Final summary of General Aspects of AI/ML Framework              Moderator (Qualcomm)

9.2.2       AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2304371         Discussion and evaluation of AI/ML for CSI feedback enhancement       FUTUREWEI

R1-2304471         Evaluation on AI/ML for CSI feedback enhancement vivo

R1-2304521         Evaluations of AI-CSI        Ericsson

R1-2304534         Evaluation on AI CSI feedback enhancement              ZTE

R1-2304550         Discussion on evaluation on AIML for CSI feedback enhancement       Spreadtrum Communications, BUPT

R1-2304653         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2304681         Evaluation of ML for CSI feedback enhancement       Nokia, Nokia Shanghai Bell

R1-2305975         Evaluation on AI/ML for CSI feedback enhancement CATT              (rev of R1-2304722)

R1-2305981         Evaluation on AI/ML for CSI feedback enhancement Fujitsu              (rev of R1-2304764)

R1-2304779         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2304813         Evaluation on AI/ML for CSI feedback enhancement Intel Corporation

R1-2304842         On Evaluation of AI/ML based CSI Google

R1-2304854         Evaluation on AI/ML for CSI feedback enhancement China Telecom

R1-2304893         Discussion on evaluation on AI/ML for CSI feedback enhancement       xiaomi

R1-2304948         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2305015         Some discussions on evaluation on AI-ML for CSI feedback              CAICT

R1-2305085         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2305160         Evaluation of AI and ML for CSI feedback enhancement              NVIDIA

R1-2305202         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2305234         Evaluation for AI/ML based CSI feedback enhancement              Apple

R1-2305296         Evaluation on AI/ML for CSI feedback enhancement LG Electronics

R1-2305328         Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated

R1-2305459         Evaluation methodology and results on AI/ML for CSI feedback enhancement       OPPO

R1-2305505         Views on Evaluation of AI/ML for CSI feedback enhancement              Samsung

R1-2305591         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2305655         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

R1-2305688         Evaluation on AI/ML for CSI feedback enhancement Mavenir

R1-2305730         Evaluation and preliminary results on AI/ML-based CSI feedback enhancement       BJTU

R1-2305789         Evaluation on AI/ML for CSI feedback enhancement ETRI

R1-2305894         Evaluation of AI/ML for CSI feedback Enhancement CEWiT

 

R1-2306058         Summary#1 for CSI evaluation of [113-R18-AI/ML] Moderator (Huawei)

R1-2306059         Summary#2 for CSI evaluation of [113-R18-AI/ML]              Moderator (Huawei)

From Tuesday session

Observation

For the AI/ML based CSI prediction, till the RAN1#113 meeting, compared to the Benchmark#1 of the nearest historical CSI, in terms of SGCS, from the UE speed perspective, in general the gain of the AI/ML based solution is related to the UE speed:

 

Observation

For the evaluation of AI/ML based CSI compression, till the RAN1#113 meeting, compared to the benchmark, in terms of SGCS,

 

Observation

For the evaluation of AI/ML based CSI compression, till the RAN1#113 meeting, compared to the benchmark, in terms of mean UPT under FTP traffic, more gains are achieved by Max rank 2 compared with Max rank 1 in general:

 

Observation

For the evaluation of AI/ML based CSI compression, till the RAN1#113 meeting, compared to the benchmark, in terms of 5% UPT under FTP, more gains are achieved by Max rank 2 compared with Max rank 1 in general:

 

Observation

For the generalization verification of AI/ML based CSI compression over various deployment scenarios, till the RAN1#113 meeting, compared to the generalization Case 1 where the AI/ML model is trained with a dataset subject to a certain deployment scenario#B and applied for inference with the same deployment scenario#B,

 

Observation

For the generalization verification of AI/ML based CSI compression over various UE distributions, till the RAN1#113 meeting, compared to the generalization Case 1 where the AI/ML model is trained with a dataset subject to a certain UE distribution#B and applied for inference with the same UE distribution#B,

 

Observation

For the scalability verification of AI/ML based CSI compression over various Tx port numbers, till the RAN1#113 meeting, compared to the generalization Case 1 where the AI/ML model is trained with a dataset subject to a certain Tx port number#B and applied for inference with the same Tx port number#B,

 

Observation

For the AI/ML based CSI prediction, till the RAN1#113 meeting, in terms of mean UPT, gains are observed compared to both Benchmark#1 of the nearest historical CSI and Benchmark#2 of a non-AI/ML based CSI prediction approach:

 

Observation

For the AI/ML based CSI prediction, till the RAN1#113 meeting, in terms of 5% UPT, gains are observed compared to both Benchmark#1 of the nearest historical CSI and Benchmark#2 of a non-AI/ML based CSI prediction approach:

 

Observation

For the generalization verification of AI/ML based CSI prediction over various UE speeds, till the RAN1#113 meeting, compared to the generalization Case 1 where the AI/ML model is trained with a dataset subject to a certain UE speed#B and applied for inference with the same UE speed#B,

 

 

R1-2306060         Summary#3 for CSI evaluation of [113-R18-AI/ML]              Moderator (Huawei)

From Wednesday session

Observation

For the comparison of quantization methods for CSI compression, till the RAN1#113 meeting, quantization non-aware training (Case 1) is in general inferior to quantization-aware training (Case 2-1/2-2), and may lead to lower performance than the benchmark.

 

Observation

For the comparison of quantization methods for CSI compression, till the RAN1#113 meeting, in general vector quantization (VQ) has comparable performance with scalar quantization (SQ):
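To make the SQ/VQ distinction concrete, a minimal sketch (hypothetical function names; the 2-bit uniform SQ grid and the 3-codeword VQ codebook are invented for the example) of quantizing a CSI latent vector both ways:

```python
import numpy as np

def scalar_quantize(z, bits=2):
    """Uniform scalar quantization: each latent dimension is mapped
    independently to one of 2**bits levels spanning [-1, 1]."""
    levels = 2 ** bits
    step = 2.0 / levels
    idx = np.clip(np.floor((z + 1.0) / step), 0, levels - 1)
    return -1.0 + (idx + 0.5) * step

def vector_quantize(z, codebook):
    """Vector quantization: the whole latent vector is mapped to its
    nearest codeword, so a single index is fed back."""
    d = np.linalg.norm(codebook - z, axis=1)
    return codebook[np.argmin(d)]

z = np.array([0.3, -0.6])                                  # toy latent
cb = np.array([[0.25, -0.5], [-0.5, 0.5], [0.9, 0.1]])     # toy VQ codebook
zq_sq = scalar_quantize(z)        # per-dimension reconstruction
zq_vq = vector_quantize(z, cb)    # one codeword for the whole vector
```

SQ feeds back one index per latent dimension while VQ feeds back a single codeword index for the whole vector, which is where the two methods trade feedback overhead against distortion.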

 

Agreement

For the intermediate KPI monitoring of CSI compression, for the FFS issue on the value of threshold of KPIth_1 in Option 1, the candidate threshold values are set as 0.02, 0.05 and 0.1.

 

Agreement

For the intermediate KPI monitoring of CSI compression, for the FFS issue on the value of threshold of KPIth_2 and KPIth_3 in Option 2, consider KPIth_2 = KPIth_3.

 

Agreement

For the evaluation of training Type 3 under CSI compression, for the benchmark case (1-on-1 joint training) for performance comparison, the structures for the pair of NW part model/UE part model for the new case are the same as for the Type 3 case to be compared.

·       E.g., if the Type 3 is Transformer#1 for NW part model and CNN#1 for UE part model, then the benchmark case for performance comparison is also Transformer#1 for NW part model and CNN#1 for UE part model with joint training.

 

R1-2306061         Summary#4 for CSI evaluation of [113-R18-AI/ML]              Moderator (Huawei)

From Thursday session

Agreement

For the intermediate KPI monitoring of CSI compression, between the two options to calculate KPIdiff achieved in the RAN1#112bis-e meeting, as baseline for calibration purpose, consider Option 1 (Gap between KPIActual and KPIGenie).
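A minimal sketch of this Option 1 monitoring metric, taking SGCS as the intermediate KPI (the per-layer SGCS formula is standard; treating KPIdiff as the absolute gap and using the 0.05 candidate threshold are illustrative assumptions for this example):

```python
import numpy as np

def sgcs(v_hat, v):
    """Squared generalized cosine similarity between the ground-truth
    eigenvector v and its reconstruction v_hat (single layer)."""
    num = np.abs(np.vdot(v_hat, v)) ** 2            # vdot conjugates v_hat
    den = np.linalg.norm(v_hat) ** 2 * np.linalg.norm(v) ** 2
    return num / den

def kpi_diff(kpi_actual, kpi_genie):
    """Option 1 metric: gap between the actual and the genie-aided KPI."""
    return abs(kpi_genie - kpi_actual)

# Toy 2-port example: a slightly mis-reconstructed eigenvector.
v = np.array([1.0, 1.0j]) / np.sqrt(2.0)
v_hat = np.array([1.0, 0.9j])
v_hat = v_hat / np.linalg.norm(v_hat)

kpi_actual = sgcs(v_hat, v)
alarm = kpi_diff(kpi_actual, kpi_genie=1.0) > 0.05  # one candidate KPI_th1
```

In this toy case the gap stays below the threshold, so no monitoring alarm would be raised.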

 

Observation

For the evaluation of AI/ML based CSI compression, till the RAN1#113 meeting, compared to the benchmark, in terms of mean UPT under full buffer, more gains are achieved by Max rank 2 compared with Max rank 1 in general:

 

Observation

For the evaluation of AI/ML based CSI compression, till the RAN1#113 meeting, compared to the benchmark, in terms of 5% UPT under full buffer,

 

Agreement

For the evaluation of the R16 eType II-like codebook based high resolution quantization of the ground-truth CSI in the CSI compression for AI/ML training, regarding the evaluation of new values of eType II parameters, consider the legacy values of PC6&PC8 as the baseline/lower-bound of performance comparison.

 

Observation

For the generalization verification of AI/ML based CSI compression over various carrier frequencies, till the RAN1#113 meeting, compared to the generalization Case 1 where the AI/ML model is trained with a dataset subject to a certain carrier frequency#B and applied for inference with the same carrier frequency#B,

 

 

R1-2306062         Summary#5 for CSI evaluation of [113-R18-AI/ML]              Moderator (Huawei)

From Friday morning

Working Assumption

For the template of Table 1. Evaluation results for CSI compression of 1-on-1 joint training without model generalization/scalability, update the entry of CQI determination method(s) to include also the RI determination:

·        Common description

·        Input type

·        Output type

·        Quantization/dequantization method

·        Rank/layer adaptation settings for rank>1

·        CQI/RI determination method(s) for AI/ML (Option 1a/1b/1c/2a/2b, etc.)

 

Observation

For the evaluation of NW first separate training with dataset sharing manner for CSI compression, till the RAN1#113 meeting, for the pairing of 1 NW to 1 UE (Case 1), as compared to 1-on-1 joint training between the NW part model and the UE part model,

 

Observation

For the evaluation of NW first separate training with dataset sharing manner for CSI compression, till the RAN1#113 meeting, for the pairing of 1 NW to 1 UE (Case 1), as compared to the case where the same set of dataset is applied for training the NW part model and training the UE part model, if the dataset#2 applied for training the UE part model is a subset of the dataset#1 applied for training the NW part model,

 

Observation

For the evaluation of UE first separate training with dataset sharing manner for CSI compression, till the RAN1#113 meeting, for the pairing of 1 NW to 1 UE (Case 1), as compared to 1-on-1 joint training between the NW part model and the UE part model,

 

 

Final summary in R1-2306063.

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including potential specification impact.

 

R1-2304372         Discussion on other aspects of AI/ML for CSI feedback enhancement       FUTUREWEI

R1-2304472         Other aspects on AI/ML for CSI feedback enhancement              vivo

R1-2304522         Discussion on AI-CSI        Ericsson

R1-2304535         Discussion on other aspects for AI CSI feedback enhancement              ZTE

R1-2304551         Discussion on other aspects on AIML for CSI feedback              Spreadtrum Communications

R1-2304654         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2304682         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2304723         Discussion on AI/ML for CSI feedback enhancement CATT

R1-2304765         Views on specification impact for CSI feedback enhancement              Fujitsu

R1-2304780         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2304814         Discussion on AI/ML for CSI feedback         Intel Corporation

R1-2304843         On Enhancement of AI/ML based CSI           Google

R1-2304855         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2304869         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2304894         Further discussion on specification impact for CSI feedback based on AI/ML             xiaomi

R1-2304949         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2305016         Discussions on AI-ML for CSI feedback       CAICT

R1-2305032         Considerations on CSI measurement enhancements via AI/ML              Sony

R1-2305058         Discussion on AI/ML based methods for CSI feedback enhancement       Fraunhofer IIS, Fraunhofer HHI

R1-2305070         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2305086         Discussion on other aspects on AI/ML for CSI feedback enhancement       CMCC

R1-2305161         AI and ML for CSI feedback enhancement   NVIDIA

R1-2305203         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2305235         Discussion on other aspects of AI/ML for CSI enhancement              Apple

R1-2305297         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2305329         Other aspects on AI/ML for CSI feedback enhancement              Qualcomm Incorporated

R1-2305460         On sub use cases and other aspects of AI/ML for CSI feedback enhancement       OPPO

R1-2305506         Discussion on potential specification impact for CSI feedback enhancement       Samsung

R1-2305592         Discussion on other aspects on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2305656         Other aspects on AI/ML for CSI feedback enhancement              MediaTek Inc.

R1-2305763         Discussion on AI/ML for CSI feedback enhancement ITL

R1-2305790         Discussion on other aspects on AI/ML for CSI feedback enhancement       ETRI

R1-2305873         Other aspects on AI/ML for CSI feedback enhancement           IIT Kanpur

 

R1-2306042         Summary #1 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Monday session

Agreement

·       Type 2 Joint training of the two-sided model at network side and UE side, respectively.

o   Note: Joint training includes both simultaneous training and sequential training, in which the pros and cons could be discussed separately

o   Note: Sequential training includes starting with UE side training, or starting with NW side training

 

R1-2306043         Summary #2 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Tuesday session

Agreement

In CSI compression using two-sided model use case, for discussion of training collaboration type 1,

 

Agreement

In CSI compression using two-sided model use case, further study the necessity, complexity, overhead, latency and potential specification impact on ground truth CSI report for NW side data collection for model performance monitoring, including:  

 

 

R1-2306044         Summary #3 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Wednesday session

Agreement

In CSI compression using two-sided model use case, for the study of UCI format, consider the legacy CSI reporting principle with CSI Part 1 and Part 2 as a starting point, where Part 1 has a network-configured fixed size and the Part 2 size is dynamic, determined by information in Part 1.

 

Agreement

In CSI compression using two-sided model use case, further study the feasibility of at least the following methods to support codebook subset restriction:

·       If input-CSI-NW/output-CSI-UE is in the angular-delay domain, beam restriction can be based on legacy SD basis vector-based input CSI in the angular domain.

·       FFS amplitude restriction

·       FFS if input-CSI-NW/output-CSI-UE is in spatial-frequency domain

 

R1-2306045         Summary #4 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Thursday session

Agreement

In CSI compression using two-sided model use case, further study the applicability and potential specification impact for CSI configuration and report: 

·       For the network to indicate CSI reporting related information, the gNB can indicate one or more of the following information to the UE:

o   Information indicating CSI payload size

o   Information indicating quantization method/granularity.

o   Rank restriction

o   Other payload related aspects

·       For UE determination/reporting of the actual CSI payload size, UE reports related information as configured by the NW.

 

Agreement

In CSI compression using two-sided model use case, further study feasibility and procedure to align the information that enables the UE to select a CSI generation model(s) compatible with the CSI reconstruction model(s) used by the gNB.

 

 

Final summary in R1-2306047.

9.2.3       AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2304373         Discussion and evaluation of AI/ML for beam management              FUTUREWEI

R1-2304439         Discussion for evaluation on AI/ML for beam management              InterDigital, Inc.

R1-2304473         Evaluation on AI/ML for beam management vivo

R1-2304536         Evaluation on AI beam management             ZTE

R1-2304552         Evaluation on AI/ML for beam management Spreadtrum Communications

R1-2304655         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2304683         Evaluation of ML for beam management      Nokia, Nokia Shanghai Bell

R1-2304724         Evaluation on AI/ML for beam management CATT

R1-2304749         Evaluation of AIML for beam management  Ericsson

R1-2304766         Evaluation on AI/ML for beam management Fujitsu

R1-2304820         Evaluations for AI/ML Beam Management   Intel Corporation

R1-2304844         On Evaluation of AI/ML based Beam Management    Google

R1-2304856         Evaluation on AI/ML for beam management China Telecom

R1-2304895         Evaluation on AI/ML for beam management xiaomi

R1-2304979         Evaluation on AI ML for Beam management              Comba

R1-2305017         Some discussions on evaluation on AI-ML for Beam management              CAICT

R1-2305087         Discussion on evaluation on AI/ML for beam management              CMCC

R1-2305162         Evaluation of AI and ML for beam management         NVIDIA

R1-2305204         Evaluation on AI/ML for beam management Lenovo

R1-2305236         Evaluation for AI/ML based beam management enhancements              Apple

R1-2305298         Evaluation on AI/ML for beam management LG Electronics

R1-2305330         Evaluation on AI/ML for beam management Qualcomm Incorporated

R1-2305461         Evaluation methodology and results on AI/ML for beam management        OPPO

R1-2305982         Evaluation on AI/ML for Beam management              Samsung              (rev of R1-2305507)

R1-2305593         Discussion on evaluation on AI/ML for beam management              NTT DOCOMO, INC.

R1-2305657         Evaluation on AI/ML for beam management MediaTek Inc.

R1-2305731         Evaluation methodology and results on AI/ML for beam management        BJTU

R1-2305791         Evaluation on AI/ML for beam management ETRI

R1-2305895         Evaluation on AI/ML for beam management CEWiT

 

R1-2306000         Feature lead summary #0 evaluation of AI/ML for beam management       Moderator (Samsung)

From Monday session

Agreement

o    Option A (baseline): the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx and Rx beams

o    Option B (optional): the Top-1 genie-aided Tx beam is the Tx beam that results in the largest L1-RSRP over all Tx beams with specific Rx beam(s)

§  Companies report the specific Rx beam(s)

§  Note: specific Rx beams are a subset of all Rx beams
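As an illustration, the two options amount to the following (the RSRP values, beam indices and helper name are hypothetical):

```python
import numpy as np

def top1_genie_tx(rsrp, rx_subset=None):
    """Return the Top-1 genie-aided Tx beam index.

    rsrp: (n_tx, n_rx) array of L1-RSRP measurements in dBm.
    rx_subset: Option A (baseline) if None -- search over all Rx beams;
               Option B if a list of specific Rx beam indices is given.
    """
    cols = rsrp if rx_subset is None else rsrp[:, rx_subset]
    # Best Tx beam = the one achieving the largest L1-RSRP over the
    # considered (Tx, Rx) beam pairs.
    return int(np.argmax(cols.max(axis=1)))

rsrp = np.array([[-80., -75.], [-70., -90.], [-85., -72.]])
print(top1_genie_tx(rsrp))        # Option A: Tx beam 1 (-70 dBm)
print(top1_genie_tx(rsrp, [1]))   # Option B, Rx beam 1 only: Tx beam 2
```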

 

Observation

For BMCase-1 and for a fixed Set B pattern option, Set B pattern will affect the beam prediction accuracy with AI/ML for both DL Tx beam prediction and beam pair prediction.

 

Agreement

 

 

R1-2306001         Feature lead summary #1 evaluation of AI/ML for beam management        Moderator (Samsung)

R1-2306002         Feature lead summary #2 evaluation of AI/ML for beam management       Moderator (Samsung)

From Wednesday session

Observation

At least for BM-Case1 for inference of DL Tx beam with L1-RSRPs of all beams in Set B, existing quantization granularity of L1-RSRP (i.e., 1dB for the best beam, 2dB for the difference to the best beam) causes [a minor loss] in beam prediction accuracy compared to unquantized L1-RSRPs of beams in Set B:
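For reference, the quantization scheme in question can be sketched as follows (simplified: only the 1 dB absolute / 2 dB differential step sizes from the observation are modelled; report ranges, bit widths and the exact rounding per TS 38.133 are omitted):

```python
def quantize_l1_rsrp(rsrps):
    """Sketch of legacy-style L1-RSRP report quantization: the best
    beam is reported on a 1 dB absolute grid, the other beams as the
    difference to the best beam on a 2 dB grid. Rounding behaviour
    here is illustrative only."""
    i_best = max(range(len(rsrps)), key=lambda i: rsrps[i])
    q_best = round(rsrps[i_best])                    # 1 dB step, best beam
    q_diffs = [2 * round((rsrps[i_best] - r) / 2)    # 2 dB step, differential
               for i, r in enumerate(rsrps) if i != i_best]
    return q_best, q_diffs
```

The observation above compares an AI/ML model fed with values quantized this way against one fed with the unquantized L1-RSRPs.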

 

Observation

The following generalization aspects were evaluated for BMCase-1 and/or BMCase-2,

Companies have provided evaluation results which show that Case 3 and/or Case 2A can provide better performance than Case 2. In most of the cases/evaluations, Case 3 shows performance degradation compared to Case 1. From the evaluation results [from 2 sources: Samsung, Nokia] for [scenario with various UE distribution], Case 3 may have similar or slightly higher performance than Case 1.

 

 

R1-2306003         Feature lead summary #3 evaluation of AI/ML for beam management       Moderator (Samsung)

From Thursday session

Observation

§  UE average throughput

§  UE 5%ile throughput

 

·       evaluation results from [4 sources: Futurewei, MediaTek, CEWiT, DoCoMo] indicate that, AI/ML can achieve [about 50%] beam prediction accuracy

·       evaluation results from [4 sources: Apple, Qualcomm, Intel, vivo] indicate that, AI/ML can achieve [about 60%~70%] beam prediction accuracy

·       evaluation results from [5 sources: CMCC, Lenovo, ZTE, Fujitsu, OPPO] indicate that, AI/ML can achieve [about 70%~80%] beam prediction accuracy.

·       evaluation results from [3 sources: Nokia, Samsung, vivo] indicate that, AI/ML can achieve [more than 80%] beam prediction accuracy

o   Note: [One source: vivo] reported that, AI/ML can achieve [89%] beam prediction accuracy with the measurements from the best Rx beam based on the best Tx beam in Set A, and AI/ML can achieve [67.6%] beam prediction accuracy with the measurements from the best Rx beam based on the best Tx beam in Set B.

·       Non-AI baseline Option 2 (exhaustive beam sweeping in Set B of beams) can achieve [about 12.5%] beam prediction accuracy 

·       evaluation results from [5 sources: Apple, Intel, vivo, Lenovo, Fujitsu] indicate that, AI/ML can achieve [70%-80%] beam prediction accuracy

o   wherein [1 source: vivo] assumed the L1-RSRP of the Top-1 predicted beam is measured with the best Rx beam searched from the best Tx beam in set B.

·       evaluation results from [1 source: OPPO] indicate that, AI/ML can achieve [80%-90%] beam prediction accuracy

·       evaluation results from [4 sources: Nokia, Qualcomm, Samsung, ZTE] indicate that, AI/ML can achieve [more than 90%] beam prediction accuracy

·       evaluation results from [3 sources: Futurewei, MediaTek, CEWiT] indicate that, AI/ML can achieve [about 70%~80%] beam prediction accuracy

·       evaluation results from [5 sources: CMCC, Intel, Qualcomm, vivo, Fujitsu] indicate that, AI/ML can achieve [80%~90%] beam prediction accuracy

·       evaluation results from [3 sources: Nokia, OPPO, Samsung] indicate that, AI/ML can achieve [90%] beam prediction accuracy for Top-2 DL Tx beam.

§  Average L1-RSRP difference of Top-1 predicted beam

·       evaluation results from [7 sources: Nokia, Qualcomm, OPPO, Samsung, CEWiT, ZTE, vivo] indicate that it can be [below or about 1dB]

·       evaluation results from [3 sources: Fujitsu, DoCoMo, Lenovo] indicate that it can be [1dB~2dB]

·       evaluation results from [1 source: vivo] indicate that it can be [3.4dB] with the assumption that the L1-RSRP of the Top-1 predicted beam is measured with the best Rx beam searched from the best Tx beam in set B

§  Average predicted L1-RSRP difference of Top-1 beam

·       evaluation results from [5 sources: vivo, Lenovo, OPPO, ZTE, Ericsson] indicate that it can be [0.8~1.5dB]

·       Note that [4 sources: vivo, Lenovo, ZTE, Ericsson] assumed that all the L1-RSRPs of Set A of beams are used as the label in AI/ML training phase (e.g., regression AI/ML model) and [1 source: OPPO] assumed that only the L1-RSRP of the Top-1 beam in Set A is used as the label in training phase and the result is [0.82 dB].

§  UE average throughput

·       evaluation results from [1 source: Nokia] indicate that AI/ML achieves [98%] of the UE average throughput of the BMCase1 baseline option 1 (exhaustive search over Set A beams).

·       evaluation results from [1 source: MediaTek] indicate that AI/ML achieves [85%] of the UE average throughput of the BMCase1 baseline option 1 (exhaustive search over Set A beams).

§  UE 5%ile throughput

·       evaluation results from [1 source: Nokia] indicate that, AI/ML achieves 84% of the UE 5%ile throughput of the BMCase1 baseline option (exhaustive search over Set A beams).

·       evaluation results from [1 source: MediaTek] indicate that, AI/ML achieves 70% of the UE 5%ile throughput of the BMCase1 baseline option (exhaustive search over Set A beams).

 

Observation

§  evaluation results [from 3 sources: Nokia, Ericsson, Intel] indicate that, AI/ML can achieve [more than 80%] beam prediction accuracy

§  evaluation results [from 5 sources: Samsung, Huawei, MediaTek, Qualcomm, Intel] indicate that, AI/ML can achieve [more than 55%] beam prediction accuracy

·       [One source: Intel] reported [more than 80%] beam prediction accuracy with 100% outdoor UEs, and [more than 60%] beam prediction accuracy with 20% outdoor UEs.

·       Evaluation results from [1 source: Samsung] show that, with limited measurements (e.g., [1 or 4]) of narrow beams in Set A[=32], AI/ML can increase beam prediction accuracy by [15% or 30%] [respectively], compared with [55%] beam prediction accuracy with measurement of wide beams only.

§  evaluation results [from 4 sources: Nokia, Ericsson, Qualcomm, Intel] indicate that, AI/ML can achieve [more than 85%] beam prediction accuracy

§  evaluation results [from 3 sources: Huawei, Samsung, Intel] indicate that, AI/ML can achieve [57%~77%] beam prediction accuracy

·       [One source: Intel] reported [more than 86%] beam prediction accuracy with 100% outdoor UEs, and [more than 70%] beam prediction accuracy with 20% outdoor UEs.

§  evaluation results [from 3 sources: Nokia, Ericsson, Intel] indicate that, AI/ML can achieve [more than 95%] beam prediction accuracy

§  evaluation results [from 3 sources: Huawei, Samsung, MediaTek] indicate that, AI/ML can achieve [85~94%] beam prediction accuracy

·       evaluation results from [1 source: Qualcomm] indicate that Top-5 DL beam prediction accuracy can be [more than 90%].

§  evaluation results [from 3 sources: Nokia, Samsung, Qualcomm] indicate that, the average L1-RSRP difference can be [less or about 1dB]

§  evaluation results [from 1 source: Nokia] indicate that, AI/ML achieves [99%] of the UE average throughput of the BMCase1 baseline option 1 (exhaustive search over Set A beams)

§  evaluation results [from 1 source: Nokia] indicate that, AI/ML achieves [94%] of the BMCase1 baseline option 1 (exhaustive search over Set A beams)

 

Observation

At least for BM-Case1 when Set B is a subset of Set A, and for DL Tx beam prediction, with the measurements of the “best” Rx beam with exhaustive beam sweeping for each model input sample, AI/ML provides better performance than with measurements of random Rx beam(s).

·       Evaluation results from [8 sources: vivo, Nokia, Fujitsu, Samsung, Lenovo, Huawei/HiSi, Ericsson, MediaTek] show [25%~50%] degradation with random Rx beam(s) compared with the “best” Rx beam in terms of Top-1 prediction accuracy.

·       Evaluation results from [1 source: CATT] show about 6% degradation with measurement of random Rx compared with measurement of best Rx in terms of Top-1 beam prediction accuracy.

Comparing performance with non-AI baseline option 2 (based on the measurement from Set B of beams), with measurements of random Rx beam(s) as AI/ML inputs:

·       Evaluation results from [5 sources: MediaTek, Fujitsu, vivo, Nokia, Samsung] show that AI/ML can still provide [7%~44%] beam prediction accuracy gain in terms of Top-1 beam prediction accuracy.

Note: In both training and inference, measurements of random Rx beams are used as AI/ML inputs.

 

Observation

At least for BM-Case1 for inference of DL Tx beam with L1-RSRPs of all beams in Set B,

 

 

Final summary in R1-2306199.

9.2.3.2       Other aspects on AI/ML for beam management

Including potential specification impact.

 

R1-2304374         Discussion on other aspects of AI/ML for beam management              FUTUREWEI

R1-2304379         Discussion on other aspects of AI/ML beam management              New H3C Technologies Co., Ltd.

R1-2304440         Discussion for other aspects on AI/ML for beam management              InterDigital, Inc.

R1-2304474         Other aspects on AI/ML for beam management          vivo

R1-2304537         Discussion on other aspects for AI beam management              ZTE

R1-2304553         Other aspects on AI/ML for beam management          Spreadtrum Communications

R1-2304656         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2304684         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2304725         Discussion on AI/ML for beam management CATT

R1-2304750         Discussion on AI/ML for beam management Ericsson

R1-2304767         Discussion for specification impacts on AI/ML for beam management        Fujitsu

R1-2304821         Other Aspects on AI/ML for Beam Management        Intel Corporation

R1-2304845         On Enhancement of AI/ML based Beam Management              Google

R1-2304896         Potential specification impact on AI/ML for beam management              xiaomi

R1-2304992         Discussion on AI/ML for beam management NEC

R1-2305018         Discussions on AI-ML for Beam management            CAICT

R1-2305033         Consideration on AI/ML for beam management         Sony

R1-2305088         Discussion on other aspects on AI/ML for beam management              CMCC

R1-2305163         AI and ML for beam management  NVIDIA

R1-2305205         Further aspects of AI/ML for beam management        Lenovo

R1-2305237         Discussion on other aspects of AI/ML based beam management enhancements      Apple

R1-2305299         Other aspects on AI/ML for beam management          LG Electronics

R1-2305331         Other aspects on AI/ML for beam management          Qualcomm Incorporated

R1-2305462         Other aspects of AI/ML for beam management           OPPO

R1-2305508         Discussion on potential specification impact for beam management        Samsung

R1-2305594         Discussion on other aspects on AI/ML for beam management              NTT DOCOMO, INC.

R1-2305658         Other aspects on AI/ML for beam management          MediaTek Inc.

R1-2305757         Discussion on AI/ML for beam management Panasonic

R1-2305792         Discussion on other aspects on AI/ML for beam management              ETRI

 

R1-2306068         Summary#1 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Monday session

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, regarding performance monitoring, study potential spec impact(s) from the following aspects in addition to those included in previous agreements:

 

 

R1-2306069         Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Tuesday session

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, regarding performance monitoring, study the necessity and potential spec impact(s) of mechanisms that facilitate the UE to detect whether the functionality/model is suitable or no longer suitable.

 

 

R1-2306070         Summary#3 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Wednesday session

Conclusion

For the study of DL beam pair prediction of BM-Case1 and BM-Case2 with a UE-side AI/ML model, RAN1 has no consensus to support the reporting of the predicted Rx beam(s) (e.g., Rx beam ID, Rx beam angle information, etc) from UE to network.

 

Agreement

For BM-Case2, study necessity, benefit(s) and potential specification impact from the following additional aspects for AI model inference:

·       Reporting information about measurements of multiple past time instances in one reporting instance for BM-Case2

o   Note: only applicable to network-side AI/ML model

·       Note: The potential performance gains of measurement reporting should be justified by considering UCI payload overhead

 

Agreement

For BM-Case1 and BM-Case2, study necessity, benefit(s) and potential specification impact from the following additional aspects for AI model inference:

 

Agreement

Regarding data collection for BM-Case1 and BM-Case2 with a UE-side AI/ML model, study the benefits, necessity and potential specification impact of the following aspect on top of those agreed in previous meetings:

 

R1-2306071         Summary#4 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Thursday session

Agreement

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, study the necessity and potential BM-specific conditions/additional conditions for functionality(ies) and/or model(s) at least from the following aspects:

·       information regarding model inference

·       Set A / Set B configuration

·       performance monitoring

·       data collection

·       assistance information

 

Final summary in R1-2306072.

9.2.4       AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2304339         Evaluation of AI/ML for Positioning Accuracy Enhancement              Ericsson

R1-2304475         Evaluation on AI/ML for positioning accuracy enhancement              vivo

R1-2304538         Evaluation on AI positioning enhancement   ZTE

R1-2304657         Evaluation on AI/ML for positioning accuracy enhancement              Huawei, HiSilicon

R1-2304685         Evaluation of ML for positioning accuracy enhancement              Nokia, Nokia Shanghai Bell

R1-2304726         Evaluation on AI/ML for positioning enhancement     CATT

R1-2304768         Discussions on evaluation results of AIML positioning accuracy enhancement       Fujitsu

R1-2304846         On Evaluation of AI/ML based Positioning  Google

R1-2304857         Evaluation on AI/ML for positioning accuracy enhancement              China Telecom

R1-2304897         Evaluation on AI/ML for positioning accuracy enhancement              xiaomi

R1-2305019         Some discussions on evaluation on AI-ML for positioning accuracy enhancement       CAICT

R1-2305089         Discussion on evaluation on AI/ML for positioning accuracy enhancement       CMCC

R1-2305123         Evaluation on AI/ML for positioning accuracy enhancement              InterDigital, Inc.

R1-2305164         Evaluation of AI and ML for positioning enhancement              NVIDIA

R1-2305206         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2305973         Evaluation on AI/ML for positioning accuracy enhancement              Apple     (rev of R1-2305238)

R1-2305300         Evaluation on AI/ML for positioning accuracy enhancement    LG Electronics

R1-2305332         Evaluation on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2305463         Evaluation methodology and results on AI/ML for positioning accuracy enhancement       OPPO

R1-2305509         Evaluation on AI/ML for Positioning            Samsung

R1-2305659         Evaluation of AIML for Positioning Accuracy Enhancement              MediaTek Inc.

R1-2305689         Evaluation of AI/ML for Positioning Accuracy Enhancement              Indian Institute of Tech (M), IIT Kanpur

R1-2305896         Evaluation on AI/ML for Positioning Accuracy Enhancement              CEWiT

 

R1-2306054         Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Monday session

Agreement

For the evaluation of AI/ML based positioning, the study of model input due to different numbers of TRPs includes the following approaches. Proponents of each approach should provide analysis of model performance, signaling overhead (including training data collection and model inference), model complexity and computational complexity.

·       Approach 1: Model input size stays constant at NTRP=18. The number of TRPs (N’TRP) that provide measurements to model input varies. When N’TRP < NTRP, the remaining (NTRP - N’TRP) TRPs do not provide measurements to model input, i.e., the measurement value is set such that the (NTRP - N’TRP) TRPs do not affect model output.

o   Approach 1-A. The set of TRPs (N’TRP) that provide measurements is fixed.

o   Approach 1-B. The set of TRPs (N’TRP) that provide measurements can change dynamically.

o   Note: for Approach 1, one model is provided to cover the entire evaluation area.

·       Approach 2: The TRP dimension of model input is equal to the number of TRPs (N’TRP) that provide measurements as model input. When N’TRP < NTRP, the remaining (NTRP - N’TRP) TRPs are ignored by the given model.

o   Approach 2-A. The set of active TRPs (N’TRP) that provide measurements is fixed.

o   Approach 2-B. The set of active TRPs (N’TRP) that provide measurements can change dynamically.

o   For Approach 2, if Nmodel (Nmodel > 1) models are provided to cover the entire evaluation area, the total (model) complexity is the summation over the Nmodel models.

Note:  The agreement is updated from agreement made in RAN1#112bis.
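The two approaches can be sketched as follows (the array shapes, zero pad value and function names are illustrative; the agreement only requires that padded TRPs not affect the model output):

```python
import numpy as np

N_TRP = 18  # total TRPs covering the evaluation area

def model_input_approach1(measurements, active_trps, pad_value=0.0):
    """Approach 1: model input size stays constant at (N_TRP, n_taps).
    TRPs that provide no measurement are filled with a value chosen so
    that they do not affect the model output (0.0 here is illustrative)."""
    x = np.full((N_TRP, measurements.shape[1]), pad_value)
    x[active_trps] = measurements
    return x

def model_input_approach2(measurements):
    """Approach 2: the TRP dimension equals the number of TRPs that
    actually provide measurements; the remaining TRPs are ignored."""
    return measurements

meas = np.ones((4, 256))                                  # e.g. PDP from 4 active TRPs
print(model_input_approach1(meas, [0, 3, 7, 11]).shape)   # (18, 256)
print(model_input_approach2(meas).shape)                  # (4, 256)
```

Approach 1 keeps one model for the whole area at the cost of a larger, mostly padded input; Approach 2 keeps the input compact but may need several models (1-A/2-A fix the active set, 1-B/2-B let it change dynamically).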

 

Observation

For AI/ML based positioning, the positioning accuracy is affected by the training dataset size for a given UE distribution area (or equivalently, sample density in #samples/m2), when UEs are distributed uniformly in training data collection.

 

 

R1-2306055         Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Tuesday session

Observation

For AI/ML assisted positioning with timing information (e.g., ToA) as model output, evaluation of the following generalization aspects show that:

 

Observation

For AI/ML assisted positioning, evaluation results demonstrate that for the generalization aspects of:

if the positioning accuracy deteriorates when the AI/ML model is trained with the dataset of one deployment scenario and tested with the dataset of a different deployment scenario, the positioning accuracy on the test dataset can be improved by better training dataset construction and/or model fine-tuning/re-training.

Note: ideal model training and switching may provide the upper bound of achievable performance when the AI/ML model needs to handle different deployment scenarios.

 

Observation

For AI/ML assisted positioning with timing information (e.g., ToA) as model output, based on evaluation results of network synchronization error in the range of 0-50 ns, when the model is trained by a dataset with network synchronization error t1 (ns) and tested in a deployment scenario with network synchronization error t2 (ns), for a given t1,

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 smaller than t1 is better than that of the cases with t2 equal to t1. For example,

o   For the case of (t1, t2)=(50ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 20~25ns) is 0.75~0.85 times that of (t1, t2)=(50ns, 50ns).

o   For the case of (t1, t2)=(50ns, 0ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 0ns) is 0.76~0.80 times that of (t1, t2)=(50ns, 50ns).

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 greater than t1 is worse than that of the cases with t2 equal to t1. The larger the difference between t1 and t2, the more the degradation. For example,

o   For the case of (t1, t2)=(0ns, 10ns), evaluation results submitted to RAN1#113 show the positioning error of (0ns, 10ns) is 1.16~2.81 times that of (0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (0ns, 20~25ns) is 2.19~10.11 times that of (0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 50ns), evaluation results submitted to RAN1#113 show the positioning error of (0ns, 50ns) is 9.68~31.95 times that of (0ns, 0ns).

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.
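The metric in this note can be computed as in the following sketch (the helper name is illustrative; `np.percentile` uses linear interpolation, which may differ slightly from a strict empirical CDF):

```python
import numpy as np

def horizontal_error_cdf90(est_xy, true_xy):
    """Horizontal positioning error (meters) at CDF = 90%: the 90th
    percentile of the 2D distance between estimated and ground-truth
    UE positions."""
    est = np.asarray(est_xy, dtype=float)
    true = np.asarray(true_xy, dtype=float)
    err = np.linalg.norm(est - true, axis=1)   # per-UE horizontal error
    return float(np.percentile(err, 90))
```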

 

Observation

For AI/ML assisted positioning with timing information (e.g., ToA) as model output, based on evaluation results of timing error in the range of 0-50 ns, when the model is trained by a dataset with UE/gNB RX and TX timing error t1 (ns) and tested in a deployment scenario with UE/gNB RX and TX timing error t2 (ns), for a given t1,

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 smaller than t1 is better than that of the cases with t2 equal to t1. For example,

o   For the case of (t1, t2)=(50ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 20~25ns) is 0.75~0.96 times that of (t1, t2)=(50ns, 50ns).

o   For the case of (t1, t2)=(50ns, 0ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 0ns) is 0.76~0.95 times that of (t1, t2)=(50ns, 50ns).

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 greater than t1 is worse than that of the cases with t2 equal to t1. The larger the difference between t1 and t2, the more the degradation. For example,

o   For the case of (t1, t2)=(0ns, 10ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(0ns, 10ns) is 1.34~2.30 times that of (t1, t2)=(0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(0ns, 20~25ns) is 5.66~13.0 times that of (t1, t2)=(0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 50ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(0ns, 50ns) is 10.62~51.52 times that of (t1, t2)=(0ns, 0ns).

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.
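The metric used throughout these observations, the horizontal positioning error at CDF=90%, is the 90th percentile of the per-UE 2-D Euclidean error. This is not part of the agreed text; a minimal illustrative sketch of how that metric can be computed from estimated and true UE positions (the function name `horizontal_error_cdf90` is assumed for illustration):

```python
import numpy as np

def horizontal_error_cdf90(est_xy, true_xy):
    # Horizontal (2-D) positioning error at CDF = 90%, i.e. the 90th
    # percentile of the per-UE Euclidean error in meters.
    err = np.linalg.norm(np.asarray(est_xy) - np.asarray(true_xy), axis=1)
    return np.percentile(err, 90)
```

The ratios quoted in the observations above are then ratios of this percentile between the tested (t1, t2) case and the matched (t1, t1) baseline.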

 

Observation

In evaluation of AI/ML assisted positioning with timing information (e.g., TOA) as model output, for L in the range of 0.25m to 5m, the timing (e.g., TOA) estimation error and positioning error increase approximately in proportion to L, where L (in meters) is the standard deviation of the truncated Gaussian distribution of the ground truth label error.
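The label-error model referenced here draws each error component from a zero-mean Gaussian with standard deviation L, truncated to a finite range. The sketch below is not part of the agreed evaluation assumptions; it illustrates one way to generate such samples via rejection sampling, with the truncation point (here ±2L) and the function name chosen for illustration only:

```python
import numpy as np

def truncated_gaussian(L, n, trunc=2.0, rng=None):
    # Rejection-sample a zero-mean Gaussian with standard deviation L,
    # truncated at +/- trunc*L (truncation range is an assumption here).
    if rng is None:
        rng = np.random.default_rng(0)
    out = np.empty(0)
    while out.size < n:
        s = rng.normal(0.0, L, size=2 * n)          # oversample, then reject
        out = np.concatenate([out, s[np.abs(s) <= trunc * L]])
    return out[:n]
```

Adding such samples to the ideal ground-truth labels before training emulates a dataset labelled with imperfect (e.g., RAT-dependent) position estimates.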

 

 

R1-2306056         Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Wednesday session

Observation

For AI/ML assisted positioning, the positioning accuracy at model inference is affected by the type of model input.  Evaluation results submitted to RAN1#113 show that if changing model input type while holding other parameters (e.g., Nt, N't, Nport, N'TRP) the same,

 

Observation

For AI/ML assisted positioning, with Nt consecutive time domain samples used as model input, evaluation results submitted to RAN1#113 show that when CIR or PDP is used as model input, using different Nt while holding other parameters the same,

 

Observation

For AI/ML assisted positioning, when N't time domain samples with the strongest power are selected as model input, evaluation results submitted to RAN1#113 show that for model input of CIR or PDP and Nt=256, using different N't while holding other parameters the same,

 

Observation

Evaluation shows that AI/ML assisted positioning with timing information (e.g., ToA) as model output is robust to certain label error based on evaluation results of L in the range of (0, 5) meters. The exact range of label error that can be tolerated depends on the positioning accuracy requirement, where a tighter positioning accuracy requirement demands smaller label error.

 

Observation

Evaluation shows that direct AI/ML positioning is robust to certain label error based on evaluation results of L in the range of (0, 5) meters. The exact range of label error that can be tolerated depends on the positioning accuracy requirement, where a tighter positioning accuracy requirement demands smaller label error.

 

Observation

For AI/ML based positioning, evaluation results show that semi-supervised learning is helpful for improving the positioning accuracy compared to supervised learning with the same amount of ideal labelled data, when the amount of ideal labelled data is limited.

 

 

R1-2306057         Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Thursday session

Observation

For data collection of training dataset for AI/ML based positioning, for a given deployment scenario (e.g., InF-scenario, clutter parameter, drop) and with uniform UE distribution, the required sample density (e.g., #samples/m2) for achieving a given positioning accuracy target varies with AI/ML design choices including:

·       different positioning approach (direct AI/ML, AI/ML-assisted),

·       different type of model input,

·       the size of model input,

·       AI/ML complexity (model complexity and computational complexity).

 

Observation

Evaluation results demonstrate that AI/ML positioning with the evaluation area as the convex hull of the horizontal BS deployment performs better than with the whole hall area as the evaluation area. This is due to: (a) the convex hull case has higher sample density if using the same training dataset size, since the convex hull has a smaller UE distribution area; (b) for the whole hall area, the UEs located outside the convex hull have diminished access to TRPs.
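Restricting the evaluation area to the convex hull of the BS deployment requires a point-in-hull test when dropping UEs. This is not part of the agreed evaluation methodology; a minimal sketch of such a test for a 2-D convex polygon, assuming the hull vertices are supplied in counter-clockwise order (function and argument names are illustrative):

```python
import numpy as np

def in_convex_hull(point, hull_vertices):
    # True if a 2-D point lies inside (or on the boundary of) a convex
    # polygon whose vertices are given in counter-clockwise order,
    # e.g. the convex hull of the horizontal BS/TRP positions.
    p = np.asarray(point, dtype=float)
    v = np.asarray(hull_vertices, dtype=float)
    for i in range(len(v)):
        a, b = v[i], v[(i + 1) % len(v)]
        # 2-D cross product of (b-a) and (p-a): must be >= 0 for every
        # edge of a counter-clockwise hull.
        cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        if cross < 0:
            return False
    return True
```

UE drops falling outside the hull would then be rejected (convex-hull case) or kept (whole-hall case), which directly changes the effective sample density for a fixed dataset size.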

 

Observation

For the evaluation of direct AI/ML positioning, with Nt consecutive time domain samples used as model input, evaluation results submitted to RAN1#113 show that when CIR, PDP, or DP is used as model input, using different Nt while holding other parameters the same, 

 

Observation

For direct AI/ML positioning, the evaluation of positioning accuracy at model inference is affected by the type of model input and AI/ML complexity. For a given AI/ML model design, there is a tradeoff between model input, AI/ML complexity (model complexity and computational complexity), and positioning accuracy. Evaluation results submitted up to RAN1#113 show that if changing model input type while holding other parameters (e.g., Nt, N't, Nport, N'TRP) the same,

 

Observation

For the evaluation of direct AI/ML positioning, when N't time domain samples with the strongest power are selected as model input, evaluation results submitted to RAN1#113 show that:

9.2.4.2       Other aspects on AI/ML for positioning accuracy enhancement

Including potential specification impact.

 

R1-2304340         Other Aspects of AI/ML Based Positioning Enhancement              Ericsson

R1-2304476         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2304539         Discussion on other aspects for AI positioning enhancement              ZTE

R1-2304554         Discussion on other aspects on AIML for positioning accuracy enhancement       Spreadtrum Communications

R1-2304658         Discussion on AI/ML for positioning accuracy enhancement              Huawei, HiSilicon

R1-2304686         Other aspects on ML for positioning accuracy enhancement              Nokia, Nokia Shanghai Bell

R1-2304727         Discussion on AI/ML for positioning enhancement    CATT

R1-2304746         Discussion of other aspects on AI/ML for positioning accuracy enhancement       NYCU

R1-2304769         Discussions on specification impacts for AIML positioning accuracy enhancement       Fujitsu

R1-2304847         On Enhancement of AI/ML based Positioning            Google

R1-2304898         Views on the other aspects of AI/ML-based positioning accuracy enhancement       xiaomi

R1-2304921         Other aspects on AI-ML for positioning accuracy enhancement              Baicells

R1-2305001         Discussion on AI/ML for positioning accuracy enhancement              NEC

R1-2305020         Discussions on AI-ML for positioning accuracy enhancement              CAICT

R1-2305034         On other aspects of AI/ML for positioning accuracy enhancement              Sony

R1-2305090         Discussion on other aspects on AI/ML for positioning accuracy enhancement       CMCC

R1-2305124         Designs and potential specification impacts of AIML for positioning          InterDigital, Inc.

R1-2305165         AI and ML for positioning enhancement       NVIDIA

R1-2305198         On potential AI/ML solutions for positioning              Fraunhofer IIS, Fraunhofer HHI

R1-2305207         AI/ML Positioning use cases and associated Impacts  Lenovo

R1-2305239         On Other aspects on AI/ML for positioning accuracy enhancement       Apple

R1-2305301         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2305333         Other aspects on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2305464         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement       OPPO

R1-2305510         Discussion on potential specification impact for Positioning              Samsung

R1-2305595         Discussion on other aspects on AI/ML for positioning accuracy enhancement       NTT DOCOMO, INC.

R1-2305660         Other Aspects on AI ML Based Positioning Enhancement              MediaTek Inc.

 

R1-2305992         FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement    Moderator (vivo)

From Tuesday session

Observation

Regarding ground truth label generation for AI/ML based positioning, multiple sources submitted evaluation results on the impact of ground truth label for training obtained by existing NR RAT-dependent positioning methods. Feasibility and performance benefit of utilizing ground truth label for training estimated by existing NR RAT-dependent positioning methods are observed.

 

 

R1-2305993         FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement       Moderator (vivo)

R1-2305994         FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement    Moderator (vivo)

From Wednesday session

Agreement

Regarding ground truth label generation for AI/ML based positioning, the following options of entity to generate ground truth label are identified when beneficial and necessary (e.g., limited PRU availability)

 

 

R1-2306180         FL summary #4 of other aspects on AI/ML for positioning accuracy enhancement       Moderator (vivo)

R1-2306206         FL summary #5 of other aspects on AI/ML for positioning accuracy enhancement    Moderator (vivo)

From Friday session

Agreement

For AI/ML assisted positioning with UE-assisted (Case 2a) and NG-RAN node assisted (Case 3a) positioning, at least the following types of model inference output are identified as candidates providing performance benefits

 

Agreement

Regarding AI/ML model monitoring for AI/ML based positioning, the following entities are identified as candidates to derive monitoring metric in addition to entities from previous agreement

 

Agreement

Regarding monitoring for AI/ML based positioning, at least the following monitoring methods with potential specification impact are identified


 RAN1#114

9.2      Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

Please refer to RP-221348 for detailed scope of the SI.

 

R1-2308543         Session notes for 9.2 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)

Endorsed and contents incorporated below.

 

[114-R18-AI/ML] – Taesang (Qualcomm)

Email discussion on AI/ML

-        To be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc.

 

R1-2307914         Updated TR 38.843 including RAN1 agreements from RAN1#113          Qualcomm Incorporated

Agreement

·       TR in R1-2307914 is endorsed as starting point.

From Friday session

Post RAN1#114 Email discussion plan:

·       For target TR 1.0.0: Aug.28~Sept.1

·       For reply Part A of RAN2 LS: Aug.28~Sept.1

·       For reply Part B of RAN2 LS: Sept.18~Sept.28

9.2.1       General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2306430         Discussion on general aspects of AI/ML framework              FUTUREWEI

R1-2306474         General aspects of AI and ML framework for NR air interface              NVIDIA

R1-2306510         Discussion on general aspects of AI/ML framework   Huawei, HiSilicon

R1-2306556         Discussions on general aspects of AI/ML framework Ruijie Network Co. Ltd

R1-2306636         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2306694         Discussion on general aspects of AI-ML framework   Continental Automotive Technologies GmbH

R1-2306739         Discussions on AI/ML framework  vivo

R1-2306794         Discussion on general aspects of common AI PHY framework              ZTE

R1-2306851         General aspects of AI/ML framework for NR air interface              Intel Corporation

R1-2306881         Discussion on general aspects of AI/ML framework   Panasonic

R1-2306902         Considerations on common AI/ML framework           Sony

R1-2306928         Discussion on general aspects of AIML framework    Ericsson

R1-2306956         On General Aspects of AI/ML Framework   Google

R1-2307013         General aspects on AI/ML framework           LG Electronics

R1-2307075         Discussion on general aspects of AI/ML framework   CATT

R1-2307122         Discussion on general aspects of AI ML framework   NEC

R1-2307153         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2307182         Discussion on general aspects of AI/ML framework   CMCC

R1-2307234         General aspects of AI/ML framework           Fraunhofer IIS, Fraunhofer HHI

R1-2307237         Further discussion on the general aspects of ML for Air-interface              Nokia, Nokia Shanghai Bell

R1-2307250         Discussion on general aspects of AI/ML framework   InterDigital, Inc.

R1-2307267         Discussion on general aspect of AI/ML framework     Apple

R1-2307332         Discussion on general aspects of AI/ML framework   KDDI Corporation

R1-2307374         Views on the general aspects of AI/ML framework    xiaomi

R1-2307465         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2307563         On general aspects of AI/ML framework      OPPO

R1-2307598         Discussion on general aspects of AI/ML framework   NYCU, NTPU

R1-2307667         General aspects of AI/ML framework and evaluation methodology       Samsung

R1-2307805         Discussion on general aspects of AI/ML framework   Lenovo

R1-2307816         Discussion on general aspects of AI/ML framework   Sharp              (Late submission)

R1-2307861         Considerations on general aspects on AI-ML framework              CAICT

R1-2307886         General Aspects of AI/ML framework          AT&T

R1-2307915         General aspects of AI/ML framework           Qualcomm Incorporated

R1-2308019         Considering on system architecture for AI/ML framework deployment          TCL

R1-2308091         On General Aspects of AI/ML Framework   IIT Kanpur, Indian Institute of Tech (M)

R1-2308131         Discussions on General Aspects of AI/ML Framework              Indian Institute of Technology Madras (IITM), IIT Kanpur, CEWiT

 

R1-2308286         Summary#1 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Monday session

Agreement

Conclude that applicable functionalities/models can be reported by UE.

 

Agreement

·       Once models are identified via Type A, UE can indicate supported AI/ML model IDs for a given AI/ML-enabled Feature/FG in a UE capability report as starting point.

o   FFS: Using a procedure other than UE capability report

·       Note: The support and applicability of model identification Type A is a separate discussion.

 

R1-2308287         Summary#2 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Tuesday session

Agreement

·       When a model of a known structure at UE (e.g., Case z4) is transferred from NW, the new model being identified (e.g., via Type B2) has the same structure as a previously identified model at the Network and UE.

o   Note: the need of model transfer will be discussed separately.

Agreement

·       Model ID in RAN1 discussion may or may not be globally unique, and different types of model IDs may be created for a single model for various LCM purposes.

·       Note: Details can be studied in the WI phase.

 

R1-2308288         Summary#3 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Wednesday session

Agreement

RAN1 confirms Assumption 2 in RAN2 LS.

Assumption 2:

For the latency requirement of data collection, RAN2 assumes:

§   For all types of offline model training (i.e., UE- /NW-/ two-sided model training), there is no latency requirement for data collection

§   For model inference, when required data comes from other entities, there is a latency requirement for data collection

§   For (real-time) model monitoring, when required monitoring data (e.g., performance metric) comes from other entities, there is a latency requirement for data collection.

 

Agreement

RAN1 confirms RAN2’s Assumption 3 for the CSI compression, CSI prediction, beam prediction and positioning use cases.

For positioning, it is noted that existing specification supports DL PRS measurement and UE positioning in both RRC_CONNECTED and RRC_INACTIVE state.

Assumption 3:

RAN2 assumes that the analysis/selection of the data collection frameworks should focus on the RRC_CONNECTED state (for both data generation and reporting). Analysis and potential enhancement of the non-connected state can be revisited when needed.

 

 

For the data generation entity and termination entity deployed at different entities, RAN1 revised RAN2’s assumptions as follows:

Agreement (For Replying RAN2 LS)

 

 

R1-2308289         Summary#4 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

R1-2308290         Summary#5 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Thursday session

Agreement (For Replying RAN2 LS)

Note: For CSI prediction, inform RAN2 that the entities for data generation and termination are under RAN1 discussion.

Note: Regarding training, RAN1 did not reply on the different NW entities for training (gNB/CN/LMF/OAM), as this is outside RAN1’s expertise and RAN1 cannot confirm it. RAN1 simply denoted them as NW in the reply.

Note: For assistance information, RAN1 did not reply; RAN1 only informed RAN2 of the related conclusions/agreements/observations.

Note: RAN1’s understanding is that “input data” in the LS refers to essential inputs for the given use case and does not include assistance information that a model may additionally use as model input. 

Note: RAN1 notes that, regarding model monitoring, performance metric is not a part of data collection but should rather be discussed as a procedure for performance monitoring. Instead, data needed for performance metric calculation (if needed) should be captured in the data collection requirement.

 

Observation

Scenario/configuration specific (including site-specific configuration/channel conditions) models may provide performance benefits in some studied use cases (i.e., when a single model cannot generalize well to multiple scenarios/configurations/sites).

·       At least, when the UE has limitations in storing all related models, model delivery/transfer to the UE, if feasible, may be beneficial, at the cost of the overhead/latency associated with model delivery/transfer.

·       Note: On-device finetuning/retraining of a single model, if feasible, may be an alternative to model delivery/transfer.

·       Note: a single model may generalize well in some studied use cases.

·       Note: Model transfer/delivery to UE may also face challenges, e.g., proprietary issues/burdens in some scenarios.

 

R1-2308291         Summary#6 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Friday session

Observation

·       Model transfer/delivery of a model with an unknown structure at the UE has more challenges related to feasibility (e.g., UE implementation feasibility) compared to transfer/delivery of a model with a known structure at the UE.

Agreement (For Replying RAN2 LS)

For CSI prediction enhancement and beam management use cases:

·       For model training, training data can be generated by UE/gNB and terminated at gNB/OAM/OTT server.

·       For NW-sided model inference, input data can be generated by UE and terminated at gNB.

·       For UE-side model inference, input data can be generated by gNB and terminated at UE.

·       For performance monitoring at the NW side, calculated performance metrics (if needed) or data needed for performance metric calculation (if needed) can be generated by UE and terminated at gNB.

Agreement

To reply to the RAN2 LS, for

Assumption 1:

RAN2 assumes that for the data collection in some scenarios (e.g., internal data up to implementation or the existing data are enough), possibly no RAN2 specification effort is needed in some scenarios, e.g. (not exhaustive):

§   For model inference of the UE-sided model, input data for model inference is available inside the UE.

§   For UE-side (real-time) monitoring of the UE-sided model, performance metrics are available inside the UE. UE can independently monitor a model's performance without any data input from NW.

RAN1 informs RAN2:

·       For model inference of the UE-sided model, input data for model inference is available inside the UE.

·       For UE-side (real-time) performance monitoring of the UE-sided model, in some cases, e.g., for CSI prediction and beam prediction, performance metrics are available inside the UE. The UE can independently monitor a model’s performance without any data input from NW.

o   Note: RAN1’s understanding is that “data input” in the above refers to essential inputs for the given use case and does not include assistance information that a model may additionally use for performance metric calculation.

Note: RAN1’s understanding is that “input data” in the LS refers to essential inputs for the given use case and does not include assistance information that a model may additionally use as model input. RAN1 did not reply on assistance information.

 

 

Final summary in R1-2308292.

9.2.2       AI/ML for CSI feedback enhancement

9.2.2.1       Evaluation on AI/ML for CSI feedback enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2306431         Discussion on evaluation of AI/ML for CSI feedback enhancement       FUTUREWEI

R1-2306475         Evaluation of AI and ML for CSI feedback enhancement              NVIDIA

R1-2306511         Evaluation on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2306606         Evaluation of AI-CSI         Ericsson

R1-2306637         Discussion on evaluation on AIML for CSI feedback enhancement       Spreadtrum Communications, BUPT

R1-2306740         Evaluation on AI/ML for CSI feedback enhancement vivo

R1-2306795         Evaluation on AI CSI feedback enhancement              ZTE

R1-2306809         Evaluation on AI/ML for CSI feedback enhancement China Telecom

R1-2306832         Evaluations on AI/ML for CSI feedback       Intel Corporation

R1-2306957         On Evaluation of AI/ML based CSI Google

R1-2307076         Evaluation and discussion on AI/ML for CSI feedback enhancement       CATT

R1-2307154         Evaluation on AI/ML for CSI feedback enhancement Fujitsu

R1-2307183         Discussion on evaluation on AI/ML for CSI feedback enhancement       CMCC

R1-2307238         Evaluation of ML for CSI feedback enhancement       Nokia, Nokia Shanghai Bell

R1-2307251         Evaluation on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2307268         Evaluation for AI/ML based CSI feedback enhancement              Apple

R1-2307375         Discussion on evaluation on AI/ML for CSI feedback enhancement       xiaomi

R1-2307466         Discussion on evaluation on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2307564         Evaluation methodology and results on AI/ML for CSI feedback enhancement       OPPO

R1-2307668         Views on Evaluation of AI/ML for CSI feedback enhancement              Samsung

R1-2307728         Evaluation on AIML for CSI feedback enhancement  Mavenir

R1-2307739         Evaluation on AI/ML for CSI feedback enhancement ETRI

R1-2307806         Evaluation on AI/ML for CSI feedback         Lenovo

R1-2307916         Evaluation on AI/ML for CSI feedback enhancement Qualcomm Incorporated

R1-2307999         Evaluation of AI/ML for CSI feedback Enhancement CEWiT

R1-2308053         Evaluation on AI/ML for CSI feedback enhancement MediaTek Inc.

R1-2308158         Observations on AI/ML for CSI Feedback Enhancement              Indian Institute of Technology Madras (IITM), IIT Kanpur, CEWiT

 

R1-2308340         Summary#1 for CSI evaluation of [114-R18-AI/ML]              Moderator (Huawei)

From Monday session

Agreement

For the evaluation of CSI enhancements, update the observations drawn in previous meetings to Updated Observation 2.1.8, Updated Observation 2.1.10, Updated Observation 2.1.12, Observation 2.1.15, and Updated Observation 2.1.20 in R1-2308340.

 

Agreement

For the evaluation of CSI enhancements, update the observations drawn in previous meetings to Updated Observation 2.1.1, Updated Observation 2.1.4, Updated Observation 2.1.5, Observation 2.1.9, and Updated Observation 2.1.11 in R1-2308340.

Note: for Updated Observation 2.1.4, for Rank 2, 2 sources [xiaomi, MediaTek] observe a performance gain of 2% at CSI overhead B (medium overhead).

Note: for Updated Observation 2.1.11, Scalability of AI/ML based CSI compression over various CSI payload sizes can also be achieved by finetuning models on CSI payload size#B, showing loss [0%~-2.2%] by 2 sources [Ericsson, vivo].

 

 

R1-2308341         Summary#2 for CSI evaluation of [114-R18-AI/ML] Moderator (Huawei)

R1-2308342         Summary#3 for CSI evaluation of [114-R18-AI/ML]              Moderator (Huawei)

From Wednesday session

Agreement

For the evaluation of CSI enhancements, update the observations drawn in previous meetings to Updated Observation 2.1.2, Updated Observation 2.1.3, Updated Observation 2.1.6, Updated Observation 2.1.7, Updated Observation 2.1.13, and Updated Observation 2.1.14 in R1-2308342.

 

Observation

For the evaluation of high resolution quantization of the ground-truth CSI for the training of CSI compression, compared to the upper-bound of Float32, quantized high resolution ground-truth CSI can achieve significant overhead reduction with minor performance loss if the parameters are appropriately selected.

 

Observation

For the generalization verification of AI/ML based CSI compression over various TxRU mappings, compared to the generalization Case 1 where the AI/ML model is trained with dataset subject to a certain TxRU mapping#B and applied for inference with a same TxRU mapping#B,

 

Observation

For the evaluation of NW first separate training with dataset sharing manner for CSI compression, for the pairing of 1 NW to 1 UE (Case 1), as compared to 1-on-1 joint training between the NW part model and the UE part model,

 

Observation

For the evaluation of UE first separate training with dataset sharing manner for CSI compression, for the pairing of 1 NW to 1 UE (Case 1), as compared to 1-on-1 joint training between the NW part model and the UE part model,

 

Observation

For the evaluation of NW first separate training with dataset sharing manner for CSI compression, for the pairing between 1 UE part model and N>1 separate NW part models (Case 3), when taking 1-on-1 joint training between the NW part model and the UE part model as benchmark, larger performance loss is observed in general than the case of NW first separate training with 1 UE part model and 1 NW part model pairing (Case 1):

 

 

R1-2308343         Summary#4 for CSI evaluation of [114-R18-AI/ML] Moderator (Huawei)

R1-2308344         Summary#5 for CSI evaluation of [114-R18-AI/ML]              Moderator (Huawei)

From Thursday session

Observation

For the evaluation of intermediate KPI based monitoring mechanism for CSI compression, for monitoring Case 1, in terms of monitoring accuracy with Option 1,

 

Observation

For the evaluation of intermediate KPI based monitoring mechanism for CSI compression, for Case 2, in terms of monitoring accuracy with Option 1,

 

Observation

For the evaluation of Type 2 training between 1 NW part model and M>1 separate UE part models (Case 2), as compared to joint training between 1 NW part model and the 1 UE part model,

 

Observation

For the evaluation of Type 2 training between 1 UE part model and N>1 separate NW part models (Case 3), as compared to joint training between 1 NW part model and the 1 UE part model,

 

Observation

For the evaluation of UE first separate training with dataset sharing manner for CSI compression, for the pairing between M>1 separate UE part models and 1 NW part model (Case 2), when taking 1-on-1 joint training between the NW part model and the UE part model as benchmark, larger performance loss is observed in general than the case of UE first separate training with 1 UE part model and 1 NW part model pairing (Case 1):

 

Observation

For the AI/ML based CSI prediction, compared with the benchmark of the nearest historical CSI:

 

Observation

For the AI/ML based CSI prediction, compared to the Benchmark#1 of the nearest historical CSI, in terms of SGCS, from UE speed perspective, in general the gain of the AI/ML based solution is related to the UE speed:
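The SGCS metric used in these CSI prediction observations compares a predicted precoder/eigenvector against the ideal one per layer. The sketch below is not part of the agreed text; it shows the commonly used form of the squared generalized cosine similarity (function name assumed), which is invariant to a common phase rotation of the vectors:

```python
import numpy as np

def sgcs(w_pred, w_true):
    # Squared generalized cosine similarity between a predicted and an
    # ideal precoding vector for one layer: 1.0 means a perfect match
    # (up to a complex scalar), 0.0 means orthogonal vectors.
    w_pred = np.asarray(w_pred).ravel()
    w_true = np.asarray(w_true).ravel()
    num = np.abs(np.vdot(w_true, w_pred)) ** 2        # vdot conjugates arg 1
    den = (np.linalg.norm(w_pred) ** 2) * (np.linalg.norm(w_true) ** 2)
    return num / den
```

Reported SGCS gains are then averages of this quantity over subbands, layers and UE drops, compared against the benchmark predictors.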

 

Observation

For the AI/ML based CSI prediction, in terms of mean UPT, gains are observed compared to both Benchmark#1 of the nearest historical CSI and Benchmark#2 of a non-AI/ML based CSI prediction approach:

 

Observation

For the AI/ML based CSI prediction, in terms of 5% UPT, gains are observed compared to both Benchmark#1 of the nearest historical CSI and Benchmark#2 of a non-AI/ML based CSI prediction approach:

 

Observation

For the evaluation of AI/ML based CSI compression, compared to the benchmark, in terms of CSI feedback reduction,

 

Observation

For the scalability verification of AI/ML based CSI compression over various bandwidths, compared to the generalization Case 1 where the AI/ML model is trained with dataset subject to a certain bandwidth#B and applied for inference with a same bandwidth#B,

 

Observation

For the AI/ML based CSI prediction, compared to the Benchmark#1 of the nearest historical CSI, in terms of SGCS, from observation window length perspective, in general the gain of the AI/ML based solution increases slightly with the observation window length:

 

Observation

For the AI/ML based CSI prediction, compared to the Benchmark#1 of the nearest historical CSI, in terms of SGCS/NMSE, from prediction window length perspective, in general the gain of the AI/ML based solution is related to the prediction length in terms of the distance to the applicable time of the predicted CSI:

 

Observation

For the evaluation of CSI compression, for the type of AI/ML model input (for CSI generation part)/output (for CSI reconstruction part), a vast majority of companies adopt precoding matrix as model input/output.

·         Note: For the evaluations of CSI compression with 1-on-1 joint training, 22 sources [Huawei, Nokia, Futurewei, Lenovo, ZTE, vivo, OPPO, Spreadtrum, Fujitsu, NTT DOCOMO, Xiaomi, Qualcomm, Intel, InterDigital, CATT, Apple, China Telecom, MediaTek, BJTU, ETRI, CMCC, Ericsson] take precoding matrix without angular-delay domain conversion as the model input/output; 2 sources [Ericsson, Samsung] take precoding matrix with angular-delay domain representation as the model input/output. No company submitted explicit channel matrix as input.

 

Final summary in R1-2308345.

R1-2308682         Evaluation results of AI/ML for CSI feedback enhancement              Moderator (Huawei)

9.2.2.2       Other aspects on AI/ML for CSI feedback enhancement

Including potential specification impact. Consider RAN agreement from RAN#100 in RP-231481 (proposal 1).

 

R1-2306432         Discussion on other aspects of AI/ML for CSI feedback enhancement       FUTUREWEI

R1-2306476         AI and ML for CSI feedback enhancement   NVIDIA

R1-2306512         Discussion on AI/ML for CSI feedback enhancement Huawei, HiSilicon

R1-2306605         Discussions on AI-CSI       Ericsson

R1-2306638         Discussion on other aspects on AIML for CSI feedback              Spreadtrum Communications

R1-2306741         Other aspects on AI/ML for CSI feedback enhancement              vivo

R1-2306796         Discussion on other aspects for AI CSI feedback enhancement              ZTE

R1-2306833         Discussion on AI/ML for CSI feedback         Intel Corporation

R1-2306898         Discussion on AI/ML for CSI feedback         SEU

R1-2306903         Considerations on CSI measurement enhancements via AI/ML              Sony

R1-2306945         Discussion on AI/ML for CSI feedback enhancement NEC

R1-2306958         On Enhancement of AI/ML based CSI           Google

R1-2307004         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2307014         Other aspects on AI/ML for CSI feedback enhancement           LG Electronics

R1-2307077         Discussion on other aspects for AI/ML CSI feedback enhancement       CATT

R1-2307155         Views on specification impact for CSI feedback enhancement              Fujitsu

R1-2307184         Discussion on other aspects on AI/ML for CSI feedback enhancement       CMCC

R1-2307239         Other aspects on ML for CSI feedback enhancement  Nokia, Nokia Shanghai Bell

R1-2307252         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2307269         Discussion on other aspects of AI/ML for CSI enhancement              Apple

R1-2307376         Remained issues discussion on specification impact for CSI feedback based on AI/ML xiaomi

R1-2307467         Discussion on other aspects on AI/ML for CSI feedback enhancement       NTT DOCOMO, INC.

R1-2307565         On sub use cases and other aspects of AI/ML for CSI feedback enhancement       OPPO

R1-2307618         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2307634         Discussion on AI/ML based methods for CSI feedback enhancement       Fraunhofer IIS, Fraunhofer HHI

R1-2307669         Discussion on potential specification impact for CSI feedback enhancement       Samsung

R1-2307740         Discussion on other aspects on AI/ML for CSI feedback enhancement       ETRI

R1-2307807         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2307862         Discussions on AI-ML for CSI feedback       CAICT

R1-2307887         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2307917         Other aspects on AI/ML for CSI feedback enhancement              Qualcomm Incorporated

R1-2307996         Varying CSI feedback granularity based on channel conditions              Rakuten Symphony

R1-2308020         Discussions on CSI measurement enhancement for AI/ML communication   TCL

R1-2308052         Other aspects on AI/ML for CSI feedback enhancement              MediaTek Inc.

R1-2308090         Other aspects on AI/ML for CSI feedback enhancement           IIT Kanpur, Indian Institute of Tech (M)

R1-2308099         Other aspects on AI/ML for CSI feedback enhancement              ITL

R1-2308159         Discussions on Other Aspects on AI/ML for CSI Feedback Enhancement       Indian Institute of Technology Madras (IITM), IIT Kanpur, CEWiT

 

R1-2308243         Summary #1 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Monday session

Agreement

·       In CSI compression using two-sided model use case, do not capture the column “Type 1 training at UE/NW/neutral site with 3GPP transparent model delivery to UE and NW respectively” in the table that summarizes training collaboration Type 1.

o   Note: both collaboration level y and z are considered for pros and cons of training types

·       In CSI compression using two-sided model use case, the following table captures the pros/cons of training collaboration type 1:

   Training types (table columns) vs. Characteristics (table rows):

·       Type 1: NW side, split into: Unknown model structure at UE / Known model structure at UE

·       Type 1: UE side, split into: Unknown model structure at NW / Known model structure at NW

 

Note: capture unknown model structure with sequential retraining in the unknown model structure at UE/NW column as a note whenever needed.

 

 

R1-2308244         Summary #2 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Wednesday session

Observation

In CSI prediction using UE-side model use case, at least the following aspects have been proposed by companies on data collection, including:

 

 

R1-2308245         Summary #3 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Thursday session

Agreement

For CSI prediction using UE-side model use case, at least the following aspects have been proposed by companies on performance monitoring for functionality-based LCM:

 

Observation

In CSI compression using two-sided model use case, at least the following options have been proposed by companies to define the pairing information used to enable the UE to select a CSI generation model(s) that is compatible with the CSI reconstruction model(s) used by the gNB:

·         Option 1: The pairing information is in the form of the CSI reconstruction model ID that the NW will use.

·         Option 2: The pairing information is in the form of the CSI generation model ID that the UE will use.

·         Option 3: The pairing information is in the form of the paired CSI generation model and CSI reconstruction model ID.

·         Option 4: The pairing information is in the form of the dataset ID used during type 3 sequential training.

·         Option 5: The pairing information is in the form of a training session ID referring to a prior training session (e.g., API) between NW and UE.

·         Option 6: The pairing information is up to UE/NW offline co-engineering alignment, transparent to 3GPP specification.

·         Note: the disclosure of the vendor information during the model pairing procedure and model identification procedure should be considered.

·         Note: If each UE-side model is compatible with all NW-side models, the pairing information is not needed for the UE.

·         Note: Above does not imply there is a need for a central entity for defining/storing/maintaining the IDs. 

 

R1-2308246         Summary #4 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

Presented in Friday session.

 

 

Final summary in R1-2308247.

9.2.3       AI/ML for beam management

9.2.3.1       Evaluation on AI/ML for beam management

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2306420         Evaluation methodology and results on AI/ML for beam management        Keysight Technologies UK Ltd

R1-2306433         Discussion on evaluation of AI/ML for beam management              FUTUREWEI

R1-2306477         Evaluation of AI and ML for beam management         NVIDIA

R1-2306513         Evaluation on AI/ML for beam management Huawei, HiSilicon

R1-2306639         Evaluation on AI/ML for beam management Spreadtrum Communications

R1-2308346         Discussion for evaluation on AI/ML for beam management              InterDigital, Inc.  (rev of R1-2306689)

R1-2306742         Evaluation on AI/ML for beam management vivo

R1-2306797         Evaluation on AI beam management             ZTE

R1-2306808         Evaluation methodology and results on AI/ML for beam management        BJTU

R1-2306810         Evaluation on AI/ML for beam management China Telecom

R1-2306856         Evaluation of AI/ML for Beam Management Intel Corporation

R1-2306930         Evaluation of AIML for beam management  Ericsson

R1-2306959         On Evaluation of AI/ML based Beam Management    Google

R1-2307015         Evaluation on AI/ML for beam management LG Electronics

R1-2307078         Evaluation and discussion on AI/ML for beam management              CATT

R1-2307156         Evaluation on AI/ML for beam management Fujitsu

R1-2307185         Discussion on evaluation on AI/ML for beam management              CMCC

R1-2307240         Evaluation of ML for beam management      Nokia, Nokia Shanghai Bell

R1-2308260         Evaluation for AI/ML based beam management enhancements              Apple     (rev of R1-2307270)

R1-2307377         Evaluation on AI/ML for beam management xiaomi

R1-2307468         Discussion on evaluation on AI/ML for beam management              NTT DOCOMO, INC.

R1-2307566         Evaluation methodology and results on AI/ML for beam management        OPPO

R1-2307670         Evaluation on AI/ML for Beam management              Samsung

R1-2307741         Evaluation on AI/ML for beam management ETRI, LG Uplus

R1-2307808         Evaluation on AI/ML for beam management Lenovo

R1-2308259         Evaluation on AI/ML for beam management Qualcomm Incorporated        (rev of R1-2308201, rev of R1-2307918)

R1-2308000         Evaluation on AI/ML for beam management CEWiT

R1-2308054         Evaluation on AI/ML for beam management MediaTek Inc.

R1-2308188         Evaluation on AI/ML for beam management New H3C Technologies Co., Ltd.

R1-2308327         Evaluation on AI/ML for beam management BUPT    (rev of R1-2308200)

 

R1-2308318         Feature lead summary #0 evaluation of AI/ML for beam management        Moderator (Samsung)

R1-2308319         Feature lead summary #1 evaluation of AI/ML for beam management       Moderator (Samsung)

From Wednesday session

Observation

Note: This is an update from the corresponding observation in RAN1#113

·       For BM-Case1 DL Tx beam prediction, when Set B is a subset of Set A, AI/ML can provide good beam prediction performance with less measurement/RS overhead compared to using all measurements of Set A (which provides 100% beam prediction performance as non-AI baseline Option 1), without considering generalization aspects, with the measurements from the best Rx beam and without UE rotation.

o   (A) With measurements of a fixed Set B of beams that is 1/4 of Set A of beams

§  Top-1 DL Tx beam prediction accuracy:

§  Top-1 DL Tx beam with 1dB margin:

§  Top-K(=2) DL Tx beam prediction accuracy

-        evaluation results from [5 sources: Samsung, CATT, Fujitsu, Spreadtrum, Nokia] indicate that Top-2 DL beam prediction accuracy can be more than 95%

-        evaluation results from [2 sources: Lenovo, Ericsson] indicate that Top-3 DL beam prediction accuracy can be more than 95%

-        evaluation results from [3 sources: BUPT, Xiaomi, Huawei/Hisi] indicate that Top-4 DL beam prediction accuracy can be more than 95%

-        evaluation results from [4 sources: HW/HiSi, CEWiT, Lenovo, ZTE] indicate that Top-5 DL beam prediction accuracy can be more than 95%

§  Average L1-RSRP difference of Top-1 predicted beam

§  Average predicted L1-RSRP difference of Top-1 beam

 

·       evaluation results from [7 sources: Futurewei, MediaTek, CEWiT, DoCoMo, LG, New H3C, ETRI] indicate that, AI/ML can achieve about 50% beam prediction accuracy

·       evaluation results from [5 sources: Apple, Qualcomm, Intel, vivo, CATT] indicate that, AI/ML can achieve about 60%~70% beam prediction accuracy

·       evaluation results from [5 sources: CMCC, Lenovo, ZTE, Fujitsu, OPPO] indicate that, AI/ML can achieve about 70%~80% beam prediction accuracy.

·       evaluation results from [4 sources: Nokia, Samsung, vivo, Spreadtrum] indicate that, AI/ML can achieve more than 80% beam prediction accuracy

·       Note: [1 source: vivo] reported that, AI/ML can achieve 89% beam prediction accuracy with the measurements from the best Rx beam based on the best Tx beam in Set A, and AI/ML can achieve 67.6% beam prediction accuracy with the measurements from the best Rx beam based on the best Tx beam in Set B.

·       Non-AI baseline Option 2 (exhaustive beam sweeping in Set B of beams) can achieve about 12.5% beam prediction accuracy 

·       evaluation results from [7 sources: Apple, Intel, vivo, Lenovo, Fujitsu, Ericsson, CATT] indicate that, AI/ML can achieve 70%-80% beam prediction accuracy

o   wherein [1 source: vivo] assumed the L1-RSRP of the Top-1 predicted beam is measured with the best Rx beam searched from the best Tx beam in set B.

·       evaluation results from [1 source: OPPO] indicate that, AI/ML can achieve 80%-90% beam prediction accuracy

·       evaluation results from [5 sources: Nokia, Qualcomm, Samsung, ZTE, Spreadtrum] indicate that, AI/ML can achieve more than 90% beam prediction accuracy

·       evaluation results from [6 sources: Futurewei, MediaTek, CEWiT, LG, New H3C, Apple] indicate that, AI/ML can achieve about 70%~ 80% beam prediction accuracy

·       evaluation results from [6 sources: CMCC, Intel, Qualcomm, vivo, Fujitsu, CATT] indicate that, AI/ML can achieve 80%~90% beam prediction accuracy

·       evaluation results from [4 sources: Nokia, OPPO, Samsung, Spreadtrum] indicate that, AI/ML can achieve 90% beam prediction accuracy for Top-2 DL Tx beam.

§  Average L1-RSRP difference of Top-1 predicted beam

·       evaluation results from [7 sources: Nokia, Qualcomm, OPPO, Samsung, CEWiT, ZTE, vivo] indicate that it can be below or about 1dB

·       evaluation results from [5 sources: Fujitsu, DoCoMo, Lenovo, CATT, Spreadtrum] indicate that it can be 1dB~2dB

·       evaluation results from [1 source: vivo] indicates that it can be 3.4dB with the assumption that the L1-RSRP of the Top-1 predicted beam is measured with the best Rx beam searched from the best Tx beam in set B

§  Average predicted L1-RSRP difference of Top-1 beam

·       evaluation results from [5 sources: vivo, Lenovo, OPPO, ZTE, Ericsson] indicate that it can be 0.8~1.5dB

·       Note that [4 sources: vivo, Lenovo, ZTE, Ericsson] assumed that all the L1-RSRPs of Set A of beams are used as the label in AI/ML training phase (e.g., regression AI/ML model) and [1 source: OPPO] assumed that only the L1-RSRP of the Top-1 beam in Set A is used as the label in training phase and the result is 0.82 dB.

§  UE average throughput

·       evaluation results from [1 source: Nokia] indicates that AI/ML achieves 98% of the UE average throughput of the BM-Case1 baseline option 1 (exhaustive search over Set A beams).

·       evaluation results from [1 source: MediaTek] indicates that AI/ML achieves 85% of the UE average throughput of the BM-Case1 baseline option 1 (exhaustive search over Set A beams).

§  UE 5%ile throughput

·       evaluation results from [1 source: Nokia] indicates that, AI/ML achieves 84% of the UE 5%ile throughput of the BM-Case1 baseline option (exhaustive search over Set A beams).

·       evaluation results from [1 source: MediaTek] indicates that, AI/ML achieves 70% of the UE 5%ile throughput of the BM-Case1 baseline option (exhaustive search over Set A beams).

 

Observation

Note: This is an update from the corresponding observation in RAN1#113

§  evaluation results [from 3 sources: Nokia, Ericsson, Intel] indicate that, AI/ML can achieve more than 80% beam prediction accuracy

§  evaluation results [from 5 sources: Samsung, Huawei, MediaTek, Qualcomm, Intel] indicate that, AI/ML can achieve more than 55% beam prediction accuracy

·       [2 sources: Intel, Ericsson] reported more than 80% beam prediction accuracy with 100% outdoor UEs, and more than 60% beam prediction accuracy with 20% outdoor UEs.

·       Evaluation results from [1 source: Samsung] show that, with limited measurements (e.g., 1 or 4) of narrow beams in Set A=32, AI/ML can increase beam prediction accuracy by 15% or 30%, respectively, compared with the 55% beam prediction accuracy achieved with measurement of wide beams only.

§  evaluation results [from 4 sources: Nokia, Ericsson, Qualcomm, Intel] indicate that, AI/ML can achieve more than 85% beam prediction accuracy

§  evaluation results [from 3 sources: Huawei, Samsung, Intel] indicate that, AI/ML can achieve 57%~77% beam prediction accuracy

·       [1 source: Intel] reported more than 86% beam prediction accuracy with 100% outdoor UEs, and more than 70% beam prediction accuracy with 20% outdoor UEs.

§  evaluation results [from 3 sources: Nokia, Ericsson, Intel] indicate that, AI/ML can achieve more than 95% beam prediction accuracy

§  evaluation results [from 3 sources: Huawei, Samsung, MediaTek] indicate that, AI/ML can achieve 85~94% beam prediction accuracy

·       evaluation results from [1 source: Qualcomm] indicate that Top-5 DL beam prediction accuracy can be more than 90%.

§  evaluation results from [4 sources: Nokia, Samsung, Qualcomm, Ericsson] indicate that, the average L1-RSRP difference can be below or about 1dB

§  evaluation results [from 1 source: Nokia] indicate that, AI/ML achieves 99% of the UE average throughput of the BM-Case1 baseline option 1 (exhaustive search over Set A beams)

§  evaluation results [from 1 source: Nokia] indicate that, AI/ML achieves 94% of the UE 5%ile throughput of the BM-Case1 baseline option 1 (exhaustive search over Set A beams)

 

Observation

Note: This is an update from the corresponding observation in RAN1#113

At least for BM-Case1 for inference of DL Tx beam with L1-RSRPs of all beams in Set B, existing quantization granularity of L1-RSRP (i.e., 1dB for the best beam, 2dB for the difference to the best beam) causes a minor loss in beam prediction accuracy compared to unquantized L1-RSRPs of beams in Set B:

·       Evaluation results from [13 sources: Interdigital, vivo, Huawei/HiSi, CATT, Fujitsu, Lenovo, Apple, Qualcomm, Samsung, DoCoMo, Ericsson, CEWiT, Nokia] show less than 5% beam prediction accuracy degradation in terms of Top-1 beam prediction accuracy.

o   Note: [1 source: Apple] uses the data without quantization for training and data with quantization for inference. Other sources use the same quantization scheme for data for training and inference.
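The quantization granularity referenced in the observation above (1dB step for the best beam, 2dB step for the difference to the best beam) can be sketched as follows. This is a simplified illustration only: the function name is hypothetical, and the report value ranges and bit-width clipping of the actual L1-RSRP report mapping are omitted.

```python
def quantize_l1_rsrp(rsrp_dbm):
    """Quantize a list of L1-RSRP measurements (dBm): the strongest
    beam on a 1dB absolute grid, the remaining beams as a 2dB-step
    difference relative to the strongest beam (sketch; no range clipping)."""
    best = max(rsrp_dbm)
    best_q = round(best)  # 1dB step for the best beam
    quantized = []
    for v in rsrp_dbm:
        if v == best:
            quantized.append(best_q)
        else:
            diff_q = 2 * round((best - v) / 2)  # 2dB differential step
            quantized.append(best_q - diff_q)
    return quantized
```

Feeding inputs produced this way (versus unquantized L1-RSRPs) into the model is what the sources above compared when assessing the resulting beam prediction accuracy degradation.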

Observation

Note: This is an update from the corresponding observation in RAN1#113

At least for BM-Case1 for inference of DL Tx beam with L1-RSRPs of all beams in Set B,

 

Observation

Note: This is an update from the corresponding observation in RAN1#113

At least for BM-Case1 when Set B is a subset of Set A, and for DL Tx beam prediction, with the measurements of the “best” Rx beam with exhaustive beam sweeping for each model input sample, AI/ML provides better performance than with measurements of random Rx beam(s).

·       Evaluation results from [12 sources: vivo, Nokia, Fujitsu, Samsung, Lenovo, Huawei/HiSi, Ericsson, MediaTek, CATT, Xiaomi, LG, ETRI] show 20%~50% degradation with random Rx beam(s) compared with the “best” Rx beam in terms of Top-1 prediction accuracy.

·       Evaluation results from [1 source: CEWiT] show 12% degradation with measurement of random Rx compared with measurement of the best Rx in terms of Top-1 beam prediction accuracy.

Comparing performance with non-AI baseline option 2 (based on the measurement from Set B of beams), with measurements of random Rx beam(s) as AI/ML inputs:

·       Evaluation results from [7 sources: MediaTek, Fujitsu, vivo, Nokia, Samsung, Xiaomi, ETRI] show that AI/ML can still provide 7%~44% beam prediction accuracy gain in terms of Top-1 beam prediction accuracy.

Note: In both training and inference, measurements of random Rx beams are used as AI/ML inputs.

 

Observation

For BM-Case1 DL Tx-Rx beam pair prediction, when Set B is a subset of Set A, AI/ML can provide good beam prediction performance with less measurement/RS overhead compared to using all measurements of Set A (which provides 100% beam prediction performance as non-AI baseline Option 1) without considering generalization aspects and without UE rotation.

·       (A) With measurements of a fixed Set B of beam pairs that is 1/4 of Set A of beam pairs

o   Top-1 beam pair prediction accuracy:

§  evaluation results from [8 sources: DoCoMo, Samsung, Fujitsu, Xiaomi, CEWiT, Futurewei, LG, ETRI] indicate that, AI/ML can achieve about 50%~70% prediction accuracy

§  evaluation results from [4 sources: Xiaomi, Nokia, CATT, Interdigital] indicate that, AI/ML can achieve 70%~80% prediction accuracy

§  evaluation results from [5 sources: OPPO, ZTE, Lenovo, China Telecom, CMCC] indicate that, AI/ML can achieve about 80%~90% prediction accuracy

§  evaluation results from [1 source: Ericsson] indicate that, AI/ML can achieve more than 90% prediction accuracy

§  Note: in the above evaluation and the rest of other KPIs, most of the sources used measurements from all Rx beams of a certain set of Tx beams, except [3 sources: DoCoMo, Fujitsu, ETRI] who use measurements from half of Rx beams of a certain set of Tx beams.

·       The results from [3 sources: DoCoMo, Fujitsu, ETRI] indicate 60%~68% prediction accuracy in terms of Top-1 beam pair prediction accuracy.

·       [1 source: CATT] additionally reports that, AI/ML can achieve 76.46% and 56.12% beam prediction accuracy with the measurements from all Rx beams and half of Rx beams of a certain set of Tx beams respectively.

§  Non-AI baseline Option 2 (exhaustive beam sweeping in Set B of beam pairs) can achieve about 25% prediction accuracy.

o   Top-1 beam pair prediction accuracy with 1dB margin:

§  evaluation results from [5 sources: DoCoMo, Samsung, Xiaomi, Fujitsu, ETRI] indicate that, AI/ML can achieve more than 70% prediction accuracy

§  evaluation results from [2 sources: Xiaomi, Interdigital] indicate that, AI/ML can achieve 80% to about 90% prediction accuracy

§  evaluation results from [6 sources: Ericsson, Lenovo, CATT, Nokia, ZTE, China Telecom] indicate that, AI/ML can achieve more than 90% prediction accuracy.

§  Note: [1 source: CATT] reported that, AI/ML can achieve 91.6% and 74.57% beam prediction accuracy with 1dB margin with the measurements from all Rx beams of a certain set of Tx beams and with half of Rx beams of a certain set of Tx beams respectively.

o   Top-K(=2) beam pair prediction accuracy

§  evaluation results from [2 sources: Samsung, CEWiT] indicate that, AI/ML can achieve 65%- 75% prediction accuracy.

§  evaluation results from [6 sources: Fujitsu, Xiaomi, Futurewei, China Telecom, LG, ETRI] indicate that, AI/ML can achieve 80%- 90% prediction accuracy

§  evaluation results from [4 sources: CATT, OPPO, Nokia, CMCC] indicate that, AI/ML can achieve more than 90% prediction accuracy

§  Note: [1 source: CATT] reported that, AI/ML can achieve 91.34% and 78.06% Top-K(=2) beam prediction accuracy with the measurements from all Rx beams and half of Rx beams of a certain set of Tx beams respectively.

§  The beam prediction accuracy increases with K.  

·       evaluation results from [1 source: Lenovo] indicate that Top-3 beam pair prediction accuracy can be more than 95%

·       evaluation results from [4 sources: Nokia, xiaomi, Fujitsu, CMCC] indicate that Top-4 beam pair prediction accuracy can be more than 95%

·       evaluation results from [2 sources: ZTE, Interdigital] indicate that Top-5 beam pair prediction accuracy can be more than 95%

·       evaluation results from [1 source: ETRI] indicate that Top-10 beam pair prediction accuracy can be more than 95% for 32 Tx and 4 Rx with results from half Rx

o   Average L1-RSRP difference of Top-1 predicted beam pair

§  evaluation results from [13 sources: CATT, OPPO, ZTE, DoCoMo, Nokia, Lenovo, xiaomi, CEWiT, Futurewei, Fujitsu, China Telecom, ETRI, Keysight] indicate that it can be below or about 1dB

§  evaluation results from [1 source: Samsung] indicates that it can be about 1.5dB

§  Note: [1 source: CATT] reported that it can be 0.716dB and 1.611dB with the measurements from all Rx beams and half of Rx beams of a certain set of Tx beams respectively.

o   Predicted L1-RSRP difference of Top-1 beam pair

§  [3 sources: ZTE, Lenovo, xiaomi] indicate that it can be below or about 1dB

§  Note that it is assumed that all the L1-RSRPs of Set A of beams are used as the label in the AI/ML training phase (e.g., regression AI/ML model)

·       (B) With measurements of a fixed Set B of beam pairs that is 1/8 of Set A of beam pairs

o   Top-1 beam pair prediction accuracy:

§  evaluation results from [4 sources: Futurewei, Lenovo, LG, ETRI] indicate that, AI/ML can achieve about 50% prediction accuracy

§  evaluation results from [4 sources: ZTE, OPPO, Intel, Fujitsu] indicate that, AI/ML can achieve about 60%~70% prediction accuracy

§  evaluation results from [6 sources: Nokia, CMCC, CAICT, China Telecom, vivo, BJTU] indicate that, AI/ML can achieve about 70%~80% prediction accuracy

§  Note: in the above evaluation and the rest of other KPIs, most of the sources used measurements from all Rx beams of a certain set of Tx beams, except [7 sources: OPPO, Fujitsu, Futurewei, BJTU, China Telecom, ETRI, CAICT] who use measurements from half of Rx beams of a certain set of Tx beams.

§  Non-AI baseline Option 2 (exhaustive beam sweeping in Set B of beam pairs) can achieve about 12.5% prediction accuracy 

o   Top-1 beam pair prediction with 1dB margin

§  evaluation results from [4 sources: Intel, Lenovo, Fujitsu, ETRI] indicate that, AI/ML can achieve 60%-70% prediction accuracy

§  evaluation results from [1 source: OPPO] indicate that, AI/ML can achieve 70%-80% prediction accuracy

§  evaluation results from [4 sources: CAICT, Nokia, vivo, ZTE] indicate that, AI/ML can achieve 80%-90% prediction accuracy

o   Top-K(=2) beam pair prediction accuracy

§  evaluation results from [4 sources: Futurewei, OPPO, LG, ETRI] indicate that, AI/ML can achieve about 70%- 80% prediction accuracy.

§  evaluation results from [6 sources: Nokia, Huawei/HiSi, vivo, BJTU, Fujitsu, China Telecom] indicate that, AI/ML can achieve 80%- 90% prediction accuracy

§  evaluation results from [2 sources: CMCC, China Telecom] indicate that, AI/ML can achieve more than 90% prediction accuracy

§  The beam prediction accuracy increases with K.  

·       evaluation results from [1 source: CMCC] indicate that Top-3 beam pair prediction accuracy can be 96%

·       evaluation results from [1 source: China Telecom] indicate that Top-4 beam pair prediction accuracy can be 96%

·       evaluation results from [1 source: ZTE] indicate that Top-5 beam pair prediction accuracy can be 91%

·       evaluation results from [1 source: Nokia] indicate that Top-5 beam pair prediction accuracy can be 94%

o   Average L1-RSRP difference of Top-1 predicted beam pair

§  evaluation results from [5 sources: ZTE, CAICT, vivo, China Telecom, Nokia] indicate that it can be below or about 1dB

§  evaluation results from [5 sources: Futurewei, Fujitsu, OPPO, Lenovo, ETRI] indicate that it can be 1dB~2dB

o   Average predicted L1-RSRP difference of Top-1 beam pair

§  evaluation results from [2 sources: ZTE, vivo] indicate that it can be 0.7~1.3dB

§  Note that it is assumed that all the L1-RSRPs of Set A of beams are used as the label in the AI/ML training phase (e.g., regression AI/ML model).

·       (C) With measurements of a fixed Set B of beam pairs that is 1/16 of Set A of beam pairs

o   Top-1 beam pair prediction accuracy

§  evaluation results from [5 sources: Futurewei, CEWiT, BJTU, Lenovo, ETRI] indicate that, AI/ML can achieve less than 50% or about 50% prediction accuracy

§  evaluation results from [2 sources: CAICT, vivo] indicate that, AI/ML can achieve about 55%~57% prediction accuracy

§  evaluation results from [3 sources: Nokia, Intel, CMCC] indicate that, AI/ML can achieve about 60%~70% prediction accuracy

§  evaluation results from [1 source: HW/HiSi] indicate that, AI/ML can achieve about 70%~80% prediction accuracy

§  Note: in the above evaluation and the rest of other KPIs, some [6 sources: Futurewei, Huawei/HiSi, CMCC, Nokia, Intel, vivo] used measurements from all Rx beams of a certain set of Tx beams, and some other [5 sources: OPPO, Lenovo, CAICT, ETRI, BJTU] use measurements from half or a fourth of Rx beams of a certain set of Tx beams.

§  Non-AI baseline Option 2 (exhaustive beam sweeping in Set B of beam pairs) can achieve about 6.25% prediction accuracy

o   Top-1 beam pair prediction with 1dB margin

§  evaluation results from [3 sources: OPPO, Lenovo, ETRI] indicate that, AI/ML can achieve less than 50% or about 50% prediction accuracy

§  evaluation results from [1 source: Intel] indicate that, AI/ML can achieve 50%~60% prediction accuracy

§  evaluation results from [3 sources: CAICT, vivo, OPPO] indicate that, AI/ML can achieve about 60%-70% prediction accuracy

§  evaluation results from [2 sources: Nokia, Huawei/Hisi] indicate that, AI/ML can achieve 72%~85% prediction accuracy

o   Top-K(=2) beam pair prediction accuracy

§  evaluation results from [3 sources: Futurewei, Lenovo, ETRI] indicate that, AI/ML can achieve less than 60% prediction accuracy.

§  evaluation results from [5 sources: Nokia, CMCC, vivo, OPPO, BJTU] indicate that, AI/ML can achieve about 70%- 80% prediction accuracy

§  evaluation results from [1 source: Huawei/HiSi] indicate that, AI/ML can achieve more than 85% prediction accuracy

§  The beam prediction accuracy increases with K.

o   Average L1-RSRP difference of Top-1 predicted beam pair

§  evaluation results from [3 sources: Huawei/HiSi, Nokia, vivo] indicate that it can be 1dB~2dB

§  evaluation results from [2 sources:  CAICT, OPPO] indicate that it can be 2dB~3dB

§  evaluation results from [2 sources: Lenovo, Futurewei] indicate that it can be more than 3dB

§  evaluation results from [1 source: ETRI] indicate that it can be about 6dB

o   Predicted L1-RSRP difference of Top-1 beam pair

§  evaluation results from [2 sources: vivo, Lenovo] indicate that it can be about 2.5dB

§  Note that it is assumed that all the L1-RSRPs of Set A of beams are used as the label in the AI/ML training phase (e.g., regression AI/ML model).

·       Note: in the above evaluations, [9 sources: CMCC, ETRI, Nokia, Lenovo, CATT, LG, OPPO, Huawei/HiSi, Intel] assumed 4 Rx, other sources assumed 8 Rx.

·       Note that ideal measurements are assumed

o   Beams could be measured regardless of their SNR.

o   No measurement error.

o   Measured in a single-time instance (within a channel-coherence time interval).

o   No quantization for the L1-RSRP measurements.

o   No constraint on UCI payload overhead for full report of the L1-RSRP measurements of Set B for NW-side models is assumed.
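Most results above are reported via the Top-K beam (pair) prediction accuracy KPI. As an illustrative definition only (function and variable names here are hypothetical, not taken from the agreed evaluation methodology text), the KPI counts the fraction of samples whose genie-aided best beam, i.e., the best beam from ideal exhaustive search over Set A, appears among the model's K highest-ranked beams:

```python
def top_k_accuracy(ranked_beams, genie_best, k):
    """Top-K beam prediction accuracy: fraction of samples whose
    genie-aided best beam (ideal exhaustive search over Set A) is
    among the model's K highest-ranked predicted beams."""
    hits = sum(1 for ranked, best in zip(ranked_beams, genie_best)
               if best in ranked[:k])
    return hits / len(genie_best)
```

By construction the metric is non-decreasing in K, which is why the summaries note that "the beam prediction accuracy increases with K".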

 

Observation

For BM-Case1 beam pair prediction, when Set B is different from Set A, with measurements of Set B of Tx wide beams that are 1/4 or 1/8 of Set A beams, evaluation results [from 1 source: Ericsson] indicate that AI/ML can provide good beam prediction performance with less measurement/RS overhead compared to using all measurements of Set A (which provides 100% beam prediction performance as non-AI baseline Option 1), without considering generalization and without UE rotation.

·       For Top-1 beam pair prediction accuracy, evaluation results [from 1 source: Ericsson] indicate that, AI/ML can achieve about 92.7%/92.5% beam prediction accuracy for 1/4 and 1/8 overhead respectively.

·       For Top-1 beam prediction accuracy with 1dB margin, evaluation results [from 1 source: Ericsson] indicate that, AI/ML can achieve about 97.6%/97.3% beam prediction accuracy for 1/4 and 1/8 overhead respectively.

Note that ideal measurements are assumed

·       Beams could be measured regardless of their SNR.

·       No measurement error.

·       Measured in a single-time instance (within a channel-coherence time interval).

·       No quantization for the L1-RSRP measurements.

·       No constraint on UCI payload overhead for full report of the L1-RSRP measurements of Set B for NW-side models is assumed.

Agreement

To calculate the measurement/RS overhead reduction and summarize results for BM-Case 2,

·       1 - N*Mt/(M*(Mt+Pt)) if no sliding window

·       1 - N/M if considering sliding window

o   where T2 is the time duration for beam prediction

Example for Case A

[1-XN/(YM)].

 

 

Example for Case B

 

o   In this case, prediction time is defined as the time from each measurement instance to the latest prediction instance before the next measurement instance.

1-N/(YM).

 

Example for Case B+
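The overhead-reduction formulas in the agreement above can be sketched as follows. The variable meanings are assumptions for illustration (the agreement excerpt defines only T2): N is taken as the number of measured beams (pairs) in Set B, M as the number of beams (pairs) in Set A, Mt as the number of measurement instances, and Pt as the number of prediction instances.

```python
def overhead_reduction_no_sliding_window(n, m, mt, pt):
    """RS overhead reduction without a sliding window: 1 - N*Mt / (M*(Mt + Pt))."""
    return 1 - (n * mt) / (m * (mt + pt))

def overhead_reduction_sliding_window(n, m):
    """RS overhead reduction with a sliding window: 1 - N/M."""
    return 1 - n / m
```

For example, under these assumptions, measuring N=8 of M=32 beams in one measurement instance (Mt=1) followed by Pt=3 prediction instances gives an overhead reduction of 1 - 8/(32*4) = 0.9375, while the sliding-window form gives 1 - 8/32 = 0.75.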

 

Observation

·       For BM-Case1 DL Tx beam prediction (unless otherwise stated), when Set B is a subset (1/4 unless otherwise stated) of Set A, with measurement errors (without differentiating BB errors and RF errors) modelled as a truncated Gaussian distribution (unless otherwise stated),

§  evaluation results from [3 sources: Nokia, Ericsson, CATT] show that the beam prediction accuracy degrades 6%~10% in terms of Top-1 beam prediction accuracy compared to the case without measurement error. And [1 source: Nokia] shows that the 95%ile of L1-RSRP diff can be about 1.4~2dB, [1 source: CATT] shows that the average L1-RSRP diff can be lower than 1dB.

§  evaluation results from [1 source: DoCoMo] show that

o   for DL Tx beam prediction, the beam prediction accuracy degrades 28.8% in terms of Top-1 beam prediction accuracy compared to the case without measurement error, and the average L1-RSRP diff can be about 7.3dB.

o   for Tx-Rx beam pair prediction when Set B is 1/8 of Set A, the beam prediction accuracy degrades 2.4% in terms of Top-1 beam prediction accuracy compared to the case without measurement error, and the average L1-RSRP diff can be about 5.8dB

o   wherein the measurement error is modelled as a uniform distribution.

§  evaluation results from [4 sources: Ericsson, Nokia, CEWiT, CATT] show that the beam prediction accuracy degrades 14% (with 3dB error) ~20% (with 4dB error) in terms of Top-1 beam prediction accuracy comparing to the one without measurement error. And [1 source: Nokia] shows that the 95%ile of L1-RSRP diff can be about 2~3.2dB. [1 source: CATT] shows that average L1-RSRP diff can be lower than 1dB.

§  evaluation results from [1 source: Google] show that the beam prediction accuracy degrades 13.6% in terms of Top-1 beam prediction accuracy comparing to the one without measurement error for DL Tx beam prediction.

§  evaluation results from [3 sources: Nokia, Ericsson, CATT] show that the beam prediction accuracy degrades 22%~30% in terms of Top-1 beam prediction accuracy comparing to the one without measurement error. And the 95%ile of L1-RSRP diff can be about 3.1~7.5dB.

o   evaluation results from [1 source: Ericsson] show that he L1-RSRP difference in 90%ile degrades 7dB for the AI/ML model, compared to baseline 1 and 2 that degrades 3 dB respectively 1 dB at the same percentile. 

§  evaluation results from [1 source: Samsung] show that for both DL Tx beam prediction and beam pair prediction, the beam prediction accuracy degrades 42~48% in terms of Top-1 beam prediction accuracy comparing to the one without measurement error. And the average L1-RSRP diff can be about 1.6dB.

o   However, comparing with the global search of all beams in Set A with the same measurement error level, for DL Tx beam prediction the beam prediction accuracy degrades less than 1% in terms of Top-1 beam prediction accuracy, and for Tx-Rx beam pair prediction the beam prediction accuracy degrades about 7% in terms of Top-1 beam prediction accuracy.

o   Note: in this evaluation, measurement errors are considered in the training and inference phases only for AI inputs, with ideal labels in the training phase.

§  evaluation results from [1 source: DoCoMo] show that

o   for DL Tx beam prediction, the beam prediction accuracy degrades 32.4% in terms of Top-1 beam prediction accuracy compared to the one without measurement error, and the average L1-RSRP diff can be about 8.34dB.

o   for Tx-Rx beam pair prediction, the beam prediction accuracy degrades 5.2% in terms of Top-1 beam prediction accuracy compared to the one without measurement error, and the average L1-RSRP diff can be about 6.4dB.

 

§  evaluation results from [1 source: Samsung] show that for DL Tx beam prediction and beam pair prediction with Set B being 1/4 of Set A, the beam prediction accuracy degrades 42% and 38% respectively in terms of Top-1 beam prediction accuracy compared to the one without measurement error, and the average L1-RSRP diff is about 1.1dB and 2.16dB respectively.

o   However, comparing with the global search of all beams in Set A with the same measurement error level, for DL Tx beam prediction the beam prediction accuracy degrades about 2 % in terms of Top-1 beam prediction accuracy, and for Tx-Rx beam pair prediction the beam prediction accuracy degrades about 8% in terms of Top-1 beam prediction accuracy.

o   Note: in this evaluation, measurement errors are considered in the training and inference phases only for AI inputs, with ideal labels in the training phase.

§  evaluation results from [1 source: Huawei/HiSi] show that for DL Tx beam prediction with Set B being 1/4 of Set A and beam pair prediction with Set B being 1/16 of Set A, the beam prediction accuracy degrades 4.3% and 6.3% respectively in terms of Top-1 beam prediction accuracy compared to the one without measurement error, and the average L1-RSRP diff becomes 0.7dB and 2.18dB larger respectively.

o   Note: in this evaluation, for DL Tx beam prediction, the measurements of Set B from each Rx beam of all Rx beams were used as AI inputs to obtain Top-K beams, followed by Top-K beam sweeping with that given Rx beam. This procedure repeats over all Rx beams, to obtain the best Tx beam at all Rx beams. 

o   Top-1 beam prediction accuracy with 1 dB margin performance has slight performance degradation (less than 0.2%) than that without measurement error.

o   Top-1 beam prediction accuracy with 1 dB margin has 10% and 20% performance degradation than that without measurement error for Set B/Set A = 1/2 and 1/4 respectively.

 

Observation

For BM-Case 1 DL Tx beam prediction without UE rotation, for Top-1 beam prediction accuracy, compared to the best Rx beams obtained from one-shot measurements (i.e., best of each Tx in Set B), with quasi-optimal Rx beam, performance degradation is observed:

·       evaluation results from [1 source: MediaTek] show 2% beam prediction accuracy degradation when Set B = 1/2 Set A and 7% beam prediction accuracy improvement when Set B = 1/4 or 1/8 Set A, when using the best Rx beams obtained from previous exhaustive sweeping (one shot, 20ms ago) of all beams in Set A, compared with using the best Rx beam for each Tx beam in Set B obtained from current exhaustive sweeping, without considering UE rotation for 3km/h UE speed. Such beam prediction accuracy improvement may not exist when considering UE rotation and higher UE speed.

·       evaluation results from [1 source: Samsung] show 2.5% beam prediction accuracy degradation using the best Rx of each Tx beam obtained from previous exhaustive sweeping (one shot, 20ms ago) compared to using the best Rx of each Tx beam obtained from current exhaustive sweeping, without considering UE rotation for 3km/h UE speed.

·       evaluation results from [1 source: Qualcomm] show 6.6%/6.9%/32.1%/45% degradation using a stochastic model in which the UE Rx beam is randomly selected with average probability that the best Rx beam is selected equal to 87.1%/75.1%/34.3%/10.9%, compared to using the best Rx of each Tx beam obtained from current exhaustive sweeping, without considering UE rotation.

·       evaluation results from [1 source: Samsung] show 13% beam prediction accuracy degradation, with the assumption that the best Rx beam for each Tx beam is obtained from previous exhaustive sweeping over all beams in Set A in an SSB-like structure (in the past 160ms for each Rx beam, with a burst of Set A beams every 20ms), without considering UE rotation for 3km/h UE speed.

·       evaluation results from [1 source: vivo] show 3%~11% beam prediction accuracy degradation, with the assumption that the best Rx beam is obtained from one specific Tx beam, which is the 1st Tx beam in Set B.

·       evaluation results from [1 source: Nokia] show 12% beam prediction accuracy degradation, with the assumption that the best Rx beam is obtained from one specific Rx beam, which is the best among the same Rx beam across different panels.

·       In addition, evaluation results from [3 sources: HW/HiSi, Fujitsu, ZTE] show 1%~4% and 6%~12% beam prediction accuracy degradation, with the assumption that the best Rx beam is used for 90% and 80% of the model input samples and a random Rx beam for the remaining samples, respectively.

·       Even so, AI/ML can still provide better performance than non-AI baseline option 2 (exhaustive beam sweeping in Set B of beams), e.g., 50%~60% beam prediction accuracy difference in terms of Top-1 beam prediction accuracy based on the evaluation results from [2 sources: Samsung, MediaTek], where non-AI baseline option 1 (exhaustive beam sweeping in Set A of beams) provides 100% prediction accuracy.

For BM-Case 2 DL Tx beam prediction with UE rotation, for Top-1 beam prediction accuracy, with quasi-optimal Rx beam selection:

 

 

R1-2308320         Feature lead summary #2 evaluation of AI/ML for beam management        Moderator (Samsung)

R1-2308321         Feature lead summary #3 evaluation of AI/ML for beam management       Moderator (Samsung)

From Thursday session

Observation

Different label options may lead to different data collection overhead for training. At least for BMCase-1, for (Option 1a) Top-1 beam (pair) in Set A as the label and (Option 2a) all L1-RSRPs per beam of all the beams (pairs) in Set A as the label, with comparable model complexity and computational complexity, the results across companies and the observed performance delta are summarized as below:

·       For Top 1 beam (pair) prediction accuracy,

o   evaluation results from [7 sources: MediaTek, OPPO, CMCC, Samsung, China Telecom, ZTE, Nokia] show that an AI/ML model with Top-1 beam (pair) in Set A as the label (Option 1a) can provide better performance (e.g., 2~7% or 12%~18% higher Top-1 beam prediction accuracy) than an AI/ML model with all L1-RSRPs per beam of all the beams (pairs) in Set A as the label (Option 2a)

o   evaluation results from [1 source: vivo] show that similar or slightly worse performance (e.g., 2% higher Top-1 beam prediction accuracy) can be achieved with Option 1a than with Option 2a

·       For Top-K beam (pair) prediction accuracy or Top-1 beam prediction accuracy with 1dB margin,

o   evaluation results from [2 sources: OPPO, Nokia] show that Option 1a can provide similar performance to Option 2a

o   evaluation results from [1 source: Samsung] show that Option 2a can provide 5%~12% better performance than Option 1a for Top-2/-4 beam pair prediction accuracy.

o   evaluation results from [1 source: vivo] show that Option 1a can provide 2%~5% better performance than Option 2a for Top-2/-6 beam pair prediction accuracy.

o   evaluation results from [1 source: ZTE] show that Option 1a can provide 2%~7%/1%~5% better performance than Option 2a for Top-2/-4 beam prediction accuracy for DL Tx beam prediction.

o   evaluation results from [1 source: MediaTek] show that Option 1a can provide <1% or 9%~17% better performance than Option 2a for Top-2/-3 beam prediction accuracy for DL Tx beam prediction, for Set B = 1/2 Set A or Set B = 1/4 or 1/8 Set A.

·       Detailed assumptions and results are listed as below:

o   evaluation results from [one source: OPPO] show that for both DL Tx beam prediction and beam pair prediction with Set B being 1/4 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide 2%~3% higher beam prediction accuracy in terms of Top-1 beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label, with comparable model complexity. The Top-K beam prediction accuracy is comparable for DL Tx beam prediction; however, the Top-K beam prediction accuracy is slightly better (<1%) with all L1-RSRPs as the label. The average L1-RSRP difference is similar (about 1.5dB) in the two cases.

o   evaluation results from [one source: Nokia] show that for Tx beam prediction with Set B being 1/2 of Set A and Set B being 1/4 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide 2%-5% higher beam prediction accuracy in terms of Top-1 beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label, with comparable model complexity. The Top-1 beam prediction accuracy with 1dB margin and the Top-K beam prediction accuracy are comparable for DL Tx beam prediction.

o   evaluation results from [one source: CMCC] show that for beam pair prediction with Set B being 1/8 or 1/16 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide 4%-6% higher beam prediction accuracy in terms of Top-1 beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label, even with larger model complexity.

o   evaluation results from [one source: Samsung] show that for beam pair prediction with Set B being 1/4 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide 12% higher beam prediction accuracy in terms of Top-1 beam prediction accuracy compared to the one with all L1-RSRPs of all the beams as the label, with comparable model complexity. However, labeling with all L1-RSRPs can provide 5% and 12% better Top-3 and Top-4 beam prediction accuracy compared with labeling with the Top-1 beam ID.

o   evaluation results from [one source: China Telecom] show that for beam pair prediction with Set B being 1/4 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide 15% higher beam prediction accuracy in terms of Top-1 beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label, with comparable model complexity. The average L1-RSRP difference is similar (about 0.4dB) in the two cases.

o   evaluation results from [one source: vivo] show that for DL Tx beam prediction with Set B being 1/4 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide similar beam prediction accuracy in terms of Top-1 beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label. Using the Top-1 beam as the label can provide 2%/5% better performance for Top-2/-6 beam prediction. The average L1-RSRP difference is similar (about 1dB) in the two cases.

o   evaluation results from [one source: vivo] show that for beam pair prediction with Set B being 1/16 of Set A, with Top-1 beam in Set A as the label, 2% beam prediction accuracy degradation in terms of Top-1 beam prediction accuracy is observed compared to the one with all L1-RSRPs per beam of all the beams as the label.

o   evaluation results from [one source: ZTE] show that for Tx beam prediction with Set B being 1/4, 1/8 or 1/16 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide comparable or up to 7% higher beam prediction accuracy in terms of Top-K (K=1, 2, 4) beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label, with comparable model complexity. However, the average L1-RSRP difference of the Top-1 predicted beam and the Top-1 beam prediction accuracy with 1dB margin are comparable or better with all L1-RSRPs per beam of all the beams as the label.

o   Evaluation results from [one source: MediaTek] show that for Tx beam prediction with Set B being 1/2 of Set A, with Top-1 beam in Set A as the label, AI/ML can provide <1% higher beam prediction accuracy in terms of Top-K (K=1,2,3) beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label, with comparable model complexity. With Set B being 1/4 or 1/8 of Set A and Top-1 beam in Set A as the label, AI/ML can provide 10-18% higher beam prediction accuracy in terms of Top-K (K=1,2,3) beam prediction accuracy compared to the one with all L1-RSRPs per beam of all the beams as the label, with comparable model complexity.

In addition, [1 source: OPPO] shows that good performance with Top-K beam(pair)s in Set A and the corresponding L1-RSRPs as the label (Option 2b) can be achieved with two separate AI models. In the evaluation, one classification model (with Top-1/K beam(s) in Set A as the label(s)) is used to predict the Top-1/K beam(s), and another regression model (with L1-RSRP(s) of the Top-1/K beam(s) in Set A as the label(s)) is used to predict the L1-RSRP(s).

Note: The beam prediction accuracy performance with AI/ML may also depend on other aspects, e.g., AI/ML model architecture choice, model training parameters (e.g., hyperparameter tuning), and the loss function corresponding to optimizing certain KPI(s). Assumptions on the loss function are not indicated in the evaluations above.

Note: ideal measurements are assumed

·       Beams could be measured regardless of their SNR.

·       No measurement error.

·       Measured in a single-time instance (within a channel-coherence time interval).

·       No quantization for the L1-RSRP measurements.

·       No constraint on UCI payload overhead for full report of the L1-RSRP measurements of Set B for NW-side models is assumed.
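As a concrete illustration of the two label options compared above, the labels could be constructed as follows; this is a hedged sketch, and the L1-RSRP values are made up for illustration:

```python
import numpy as np

# Ideal L1-RSRP (dBm) over the M = 4 beams of Set A for two training samples
rsrp_set_a = np.array([[-80.0, -75.0, -90.0, -85.0],
                       [-70.0, -72.0, -71.0, -95.0]])

# Option 1a: Top-1 beam (pair) index in Set A as the label -> a classification task
labels_1a = np.argmax(rsrp_set_a, axis=1)

# Option 2a: all L1-RSRPs of the beams in Set A as the label -> a regression task;
# Top-1/Top-K beams are then derived from the regressed values at inference time
labels_2a = rsrp_set_a.copy()
top1_from_2a = np.argmax(labels_2a, axis=1)

print(labels_1a)        # [1 0]
print(top1_from_2a)     # [1 0]
```

Option 2b (per OPPO's evaluation above) would instead pair a classification model for the Top-1/K beam indices with a regression model for their L1-RSRPs.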

 

Observation

At least for BM-Case1 (unless otherwise stated) DL Tx beam prediction with the measurements from the best Rx beam, and/or beam pair prediction, when Set B is a subset of Set A, without considering other generalization aspects and without UE rotation:

 

Observation 4.1.3A in R1-2308321 is confirmed.

 

Observation

At least for BMCase-1, AI/ML (without considering model switching) has some performance degradation with some unseen scenarios including:

 

However, the AI/ML (without considering model switching) has significant performance degradation with some other unseen scenarios, including:

In order to let the AI/ML model see the data from a new setting which causes performance loss, the AI/ML model can be trained with mixed data or fine-tuned with the data from the new setting to improve the generalization performance. Alternatively, AI/ML models can be trained for different scenarios, relying on model switching based on the applicable scenario, which would improve generalization performance.

 

Observation

For BMCase-2, for variable UE mobility, the collected data for training can be mixed and the generalization performance with mixed UE speeds is acceptable.

 

 

R1-2308585         Feature lead summary #4 evaluation of AI/ML for beam management       Moderator (Samsung)

From Friday session

Observation

Different location of AI/ML model (e.g., NW side model, or UE side model) may have different generalization requirements: 

For NW side model,

For UE side model,

 

Agreement

Observation 4.1.4 in R1-2308585 is confirmed.

 

Agreement

Observation 6.1 in R1-2308585 is confirmed.

Note: this is an update of corresponding observation made in previous meeting.

 

R1-2308680         Evaluation results for AI/ML in BM              Moderator (Samsung)

9.2.3.2       Other aspects on AI/ML for beam management

Including potential specification impact.

 

R1-2306399         Discussion on other aspects of AI/ML beam management              New H3C Technologies Co., Ltd.

R1-2306434         Discussion on other aspects of AI/ML for beam management              FUTUREWEI

R1-2306478         AI and ML for beam management  NVIDIA

R1-2306514         Discussion on AI/ML for beam management Huawei, HiSilicon

R1-2306640         Other aspects on AI/ML for beam management          Spreadtrum Communications

R1-2306690         Discussion for other aspects on AI/ML for beam management              InterDigital, Inc.

R1-2306743         Other aspects on AI/ML for beam management          vivo

R1-2306798         Discussion on other aspects for AI beam management              ZTE

R1-2306857         Other Aspects on AI/ML for Beam Management        Intel Corporation

R1-2306904         Considerations on AI/ML for beam management        Sony

R1-2306929         Discussion on AI/ML for beam management Ericsson

R1-2306960         On Enhancement of AI/ML based Beam Management              Google

R1-2307016         Other aspects on AI/ML for beam management          LG Electronics

R1-2307079         Discussion on other aspects for AI/ML beam management              CATT

R1-2307137         Discussion on AI ML for beam management NEC

R1-2307157         Discussion for specification impacts on AI/ML for beam management        Fujitsu

R1-2307186         Discussion on other aspects on AI/ML for beam management              CMCC

R1-2307233         Discussion on AI/ML for beam management Panasonic

R1-2307241         Other aspects on ML for beam management Nokia, Nokia Shanghai Bell

R1-2308261         Discussion on other aspects of AI/ML based beam management enhancements      Apple     (rev of R1-2307271)

R1-2307378         Potential specification impact on AI/ML for beam management              xiaomi

R1-2307469         Discussion on other aspects on AI/ML for beam management              NTT DOCOMO, INC.

R1-2307567         Other aspects of AI/ML for beam management           OPPO

R1-2307671         Discussion on potential specification impact for beam management        Samsung

R1-2307730         Prediction of untransmitted beams in a UE-side AI-ML model              Rakuten Symphony

R1-2307742         Discussion on other aspects on AI/ML for beam management              ETRI

R1-2307809         Further aspects of AI/ML for beam management        Lenovo

R1-2307863         Discussions on AI-ML for Beam management            CAICT

R1-2307867         Discussion on other aspects on AI/ML for beam management  KT Corp.

R1-2307919         Other aspects on AI/ML for beam management          Qualcomm Incorporated

R1-2308055         Other aspects on AI/ML for beam management          MediaTek Inc.

R1-2308160         Discussion on other aspects of AI/ML for beam management              Indian Institute of Technology Madras (IITM), IIT Kanpur, CEWiT

 

R1-2308313         Summary#1 for other aspects on AI/ML for beam management              Moderator (OPPO)

R1-2308314         Summary#2 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Wednesday session

Conclusion

Regarding data collection for NW-side AI/ML model of BM-Case1 and BM-Case2, the following approaches have been identified by companies for overhead reduction

 

Observation

At least for BM-Case1 with a UE-side AI/ML model, for AI model inference, the legacy TCI state mechanism can be used to perform beam indication of beams

 

Observation

Regarding data collection for NW-side AI/ML model of BM-Case1 and BM-Case2, the following reporting signaling for beam-specific aspects may be applicable:

 

 

R1-2308315         Summary#3 for other aspects on AI/ML for beam management              Moderator (OPPO)

R1-2308316         Summary#4 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Thursday session

Observation

Regarding the performance metric(s) of AI/ML model monitoring for BM-Case1 and BM-Case2, the following table is identified

Alt.1: Beam prediction accuracy related KPIs, e.g., Top-K/1 beam prediction accuracy

o   Applicable to all studied AI models

o   Reflects the prediction accuracy of the AI model

o   Does not reflect the system/link performance directly

Alt.2: Link quality related KPIs, e.g., throughput, L1-RSRP, L1-SINR, hypothetical BLER

o   Applicable to all studied AI models

o   Reflects the system/link performance

o   Does not reflect the prediction accuracy of the AI model directly

Alt.3: Performance metric based on input/output data distribution of AI/ML

o   Applicable to all studied AI models

o   Reflects the change of the statistics of the input/output data

o   Does not reflect the prediction performance of the AI model directly

o   Does not reflect the system/link performance directly

Alt.4: The L1-RSRP difference evaluated by comparing measured RSRP and predicted RSRP

o   May not be applicable to some implementations of AI models (e.g., no output of predicted L1-RSRP)

o   Reflects the accuracy of the predicted L1-RSRP

o   Does not reflect the system/link performance directly

Note1: The above analysis shall not give an indication about whether/which metric is supported or specified

Note2: Monitoring performance of the above alternatives is not addressed in the table
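For concreteness, Alt.1 and Alt.4 above could be computed along these lines; this is a hedged sketch (function names and toy values are our own, not from any specification):

```python
import numpy as np

def alt1_top1_accuracy(pred_rsrp, meas_rsrp):
    """Alt.1: beam-prediction-accuracy KPI (Top-1 variant shown)."""
    return np.mean(np.argmax(pred_rsrp, axis=1) == np.argmax(meas_rsrp, axis=1))

def alt4_rsrp_diff(pred_rsrp, meas_rsrp):
    """Alt.4: per-sample difference between measured and predicted L1-RSRP
    of the predicted Top-1 beam; only defined when the model outputs RSRP."""
    idx = np.argmax(pred_rsrp, axis=1)
    rows = np.arange(pred_rsrp.shape[0])
    return np.abs(meas_rsrp[rows, idx] - pred_rsrp[rows, idx])

# toy data: 2 monitoring samples, 3 beams (dBm values are made up)
meas = np.array([[-80., -75., -90.], [-70., -72., -71.]])
pred = np.array([[-79., -74., -91.], [-72., -70., -71.]])
print(alt1_top1_accuracy(pred, meas))   # sample 2 picks beam 1 but the truth is beam 0
print(alt4_rsrp_diff(pred, meas))
```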

 

 

R1-2308317         Summary#5 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Friday session

Observation

For BM-Case1 and BM-Case2 with a UE-side AI/ML model, consistency / association of Set B beams and Set A beams across training and inference is beneficial from performance perspective.

·         Note: Whether specification impact is needed is a separate discussion.

9.2.4       AI/ML for positioning accuracy enhancement

9.2.4.1       Evaluation on AI/ML for positioning accuracy enhancement

Including evaluation methodology, KPI, and performance evaluation results.

 

R1-2306454         Evaluation of AI/ML for Positioning Accuracy Enhancement              Ericsson

R1-2306479         Evaluation of AI and ML for positioning enhancement              NVIDIA

R1-2306515         Evaluation on AI/ML for positioning accuracy enhancement              Huawei, HiSilicon

R1-2308337         Evaluation on AI/ML for positioning accuracy enhancement              vivo       (rev of R1-2306744)

R1-2306799         Evaluation on AI positioning enhancement   ZTE

R1-2308330         Evaluation on AI/ML for positioning accuracy enhancement              China Telecom    (rev of R1-2306811)

R1-2306961         On Evaluation of AI/ML based Positioning  Google

R1-2308205         Evaluation and discussion on AI/ML for positioning accuracy enhancement       CATT    (rev of R1-2307080)

R1-2307187         Discussion on evaluation on AI/ML for positioning accuracy enhancement       CMCC

R1-2307235         Evaluating the impact of AI/ML on positioning accuracy enhancements      Fraunhofer IIS, Fraunhofer HHI

R1-2307242         Evaluation of ML for positioning accuracy enhancement              Nokia, Nokia Shanghai Bell

R1-2308248         Evaluation on AI/ML for positioning accuracy enhancement              Apple     (rev of R1-2307272)

R1-2307379         Evaluation on AI/ML for positioning accuracy enhancement              xiaomi

R1-2307568         Evaluation methodology and results on AI/ML for positioning accuracy enhancement       OPPO

R1-2307582         Evaluation on AI/ML for positioning accuracy enhancement              InterDigital, Inc.

R1-2307672         Evaluation on AI/ML for Positioning            Samsung

R1-2307810         Discussion on AI/ML Positioning Evaluations            Lenovo

R1-2307920         Evaluation on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2308056         Evaluation of AIML for Positioning Accuracy Enhancement              MediaTek Inc.

R1-2308161         Evaluation of AI/ML for Positioning Accuracy Enhancement              Indian Institute of Technology Madras (IITM), IIT Kanpur, CEWiT

 

R1-2308355         Summary #1 of Evaluation on AI/ML for positioning accuracy enhancement       Moderator (Ericsson)

R1-2308356         Summary #2 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Tuesday session

Agreement

Update the RAN1#113 agreement so that the same understanding applies to both Approach 1-A and 2-A:

Agreement

For the evaluation of AI/ML based positioning, the study of model input due to different number of TRPs includes the following approaches. Proponents of each approach should provide analysis for model performance, signalling overhead (including training data collection and model inference), model complexity and computational complexity.

·       Approach 1: Model input size stays constant as NTRP=18. The number of TRPs (N’TRP) that provide measurements to model input varies. When N’TRP < NTRP, the remaining (NTRP - N’TRP) TRPs do not provide measurements to model input, i.e., measurement value is set such that the (NTRP - N’TRP) TRPs do not affect model output.

o   Approach 1-A. The set of TRPs (N’TRP) that provide measurements is fixed.

o   Approach 1-B. The set of TRPs (N’TRP) that provide measurements can change dynamically.

o   Note: for Approach 1, one model is provided to cover the entire evaluation area.

·       Approach 2: The TRP dimension of model input is equal to the number of TRPs (N’TRP) that provide measurements as model input. When N’TRP < NTRP, the remaining (NTRP - N’TRP) TRPs are ignored by the given model.

o   Approach 2-A. The set of active TRPs (N’TRP) that provide measurements is fixed.

§  For both Approach 1-A and 2-A: one model can be provided to cover the entire evaluation area, which is equivalent to deploying N’TRP TRPs in the evaluation area for positioning, if ignoring the potential interference from the remaining (18 - N’TRP) TRPs.

o   Approach 2-B. The set of active TRPs (N’TRP) that provide measurements can change dynamically.

§  For Approach 2-B, one model is developed to handle various patterns of active TRPs.

o   For Approach 2, if Nmodel (Nmodel > 1) models are provided to cover the entire evaluation area, the total complexity (model complexity) is the summation of the Nmodel models.
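The difference between the two model-input approaches above can be sketched as follows; this is an illustrative assumption of how the padding might be implemented (the neutral padding value and function names are our own):

```python
import numpy as np

N_TRP = 18  # total TRPs in the evaluation area

def approach_1_input(meas, active_trps):
    """Approach 1: model input size fixed at N_TRP; TRPs that do not
    provide measurements are set to a value (0 here) intended not to
    affect the model output."""
    x = np.zeros(N_TRP)
    x[active_trps] = meas
    return x

def approach_2_input(meas):
    """Approach 2: the TRP dimension of the model input equals the
    number of TRPs actually providing measurements."""
    return np.asarray(meas, dtype=float)

meas = [1.2, 3.4, 5.6]    # measurements from N'TRP = 3 TRPs
active = [0, 7, 11]       # which TRPs reported (fixed set for Approach 1-A/2-A)
print(approach_1_input(meas, active).shape)   # (18,)
print(approach_2_input(meas).shape)           # (3,)
```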

 

Conclusion

For AI/ML based positioning, capture the sampling period used in companies' evaluations in TR 38.843 as follows:

 

Observation

For direct AI/ML positioning and different drops, evaluation has been performed where the AI/ML model is (a) previously trained for drop A with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for drop B with a dataset of sample density x% × N (#samples/m2), (c) then tested under drop B and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for drop B.
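The train/fine-tune/test protocol used in these observations can be sketched with a toy regressor; the data, model, and x% value below are illustrative assumptions, not any company's actual evaluation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y, w0=None, lr=0.1, steps=500):
    """Tiny gradient-descent linear regressor; passing w0 models
    fine-tuning that starts from a previously trained model."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def err_cdf90(y_true, y_pred):
    # horizontal accuracy at CDF = 90%: the 90th percentile of the error
    return np.quantile(np.abs(y_true - y_pred), 0.9)

# Synthetic "drop A" and "drop B" with slightly different input-output maps
X_A = rng.normal(size=(1000, 4)); y_A = X_A @ np.array([1.0, 2.0, 0.5, -1.0])
X_B = rng.normal(size=(1000, 4)); y_B = X_B @ np.array([1.2, 1.8, 0.7, -0.9])

w_A = fit(X_A, y_A)                          # (a) train on drop A, density N
x_pct = 0.10                                 # fine-tuning density x% of N
n_ft = int(x_pct * len(X_B))
w_ft = fit(X_B[:n_ft], y_B[:n_ft], w0=w_A)   # (b) fine-tune on drop B
w_full = fit(X_B, y_B)                       # full training on drop B

E = err_cdf90(y_B, X_B @ w_ft)               # (c) test under drop B
E_full = err_cdf90(y_B, X_B @ w_full)        # full-training accuracy baseline
print(round(E, 4), round(E_full, 4))
```

The same pattern applies to the subsequent observations on clutter parameters, synchronization error, UE timing error, InF scenarios, SNR, and time-varying assumptions, with "drop" replaced by the corresponding setting.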

 

Observation

For direct AI/ML positioning and different drops, evaluation has been performed where the AI/ML model is (a) previously trained for drop A with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for drop B with a dataset of sample density x% × N (#samples/m2), (c) then tested under drop A and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for drop A.

 

Observation

For direct AI/ML positioning and different clutter parameters, evaluation has been performed where the AI/ML model is (a) previously trained for clutter parameter A with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for clutter parameter B with a dataset of sample density x% × N (#samples/m2), (c) then tested under clutter parameter B and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for clutter parameter B.

 

Observation

For direct AI/ML positioning and different clutter parameters, evaluation has been performed where the AI/ML model is (a) previously trained for clutter parameter A with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for clutter parameter B with a dataset of sample density x% × N (#samples/m2), (c) then tested under clutter parameter A and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for clutter parameter A.

 

Observation

For direct AI/ML positioning and different network synchronization error, evaluation has been performed where the AI/ML model is (a) previously trained for network synchronization error = A (ns) with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for network synchronization error = B (ns) with a dataset of sample density x% × N (#samples/m2), (c) then tested under network synchronization error = B (ns) and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for network synchronization error = B (ns).

 

Observation

For direct AI/ML positioning and different network synchronization error, evaluation has been performed where the AI/ML model is (a) previously trained for network synchronization error = 0 ns with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for network synchronization error = 50 ns with a dataset of sample density x% × N (#samples/m2), (c) then tested under network synchronization error = 0 ns and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for network synchronization error = 0 ns.

 

Observation

For direct AI/ML positioning and different UE timing error, evaluation has been performed where the AI/ML model is (a) previously trained without UE timing error with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning with UE timing error with a dataset of sample density x% × N (#samples/m2), (c) then tested with UE timing error and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for the case with UE timing error.

 

Observation

For direct AI/ML positioning and different InF scenarios, evaluation has been performed where the AI/ML model is (a) previously trained for InF scenario A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for InF scenario B with a dataset of sample density x% × N (#samples/m²), (c) then tested under InF scenario B and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for InF scenario B.

 

Observation

For direct AI/ML positioning and different InF scenarios, evaluation has been performed where the AI/ML model is (a) previously trained for InF scenario A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for InF scenario B with a dataset of sample density x% × N (#samples/m²), (c) then tested under InF scenario A and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for InF scenario A.

 

Observation

For direct AI/ML positioning and different SNR value (dB), evaluation has been performed where the AI/ML model is (a) previously trained for SNR value A (dB) with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for SNR value B (dB) with a dataset of sample density x% × N (#samples/m²), (c) then tested under SNR value B (dB) and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for SNR value B (dB).

 

Observation

For direct AI/ML positioning and different time varying assumptions, evaluation has been performed where the AI/ML model is (a) previously trained for the scenario without time varying change with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for the scenario with time varying change with a dataset of sample density x% × N (#samples/m²), (c) then tested under the scenario with time varying change and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for the scenario with time varying change.

 

Observation

For direct AI/ML positioning and different channel estimation error, evaluation has been performed where the AI/ML model is (a) previously trained for channel estimation error = 20 dB with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for channel estimation error = 0 dB with a dataset of sample density x% × N (#samples/m²), (c) then tested under channel estimation error = 0 dB and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for channel estimation error = 0 dB.

 

Observation

For direct AI/ML positioning and different channel estimation error, evaluation has been performed where the AI/ML model is (a) previously trained for channel estimation error = 20 dB with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for channel estimation error = 0 dB with a dataset of sample density x% × N (#samples/m²), (c) then tested under channel estimation error = 20 dB and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for channel estimation error = 20 dB.

 

 

Observation

For AI/ML assisted positioning with timing information as model output and for different drops, evaluation has been performed where the AI/ML model is (a) previously trained for drop A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for drop B with a dataset of sample density x% × N (#samples/m²), (c) then tested under drop B and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for drop B.

 

Observation

For AI/ML assisted positioning with timing information as model output and for different clutter parameters, evaluation has been performed where the AI/ML model is (a) previously trained for clutter parameter A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for clutter parameter B with a dataset of sample density x% × N (#samples/m²), (c) then tested under clutter parameter B and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for clutter parameter B.

 

Observation

For AI/ML assisted positioning with timing information as model output and for different clutter parameters, evaluation has been performed where the AI/ML model is (a) previously trained for clutter parameter A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for clutter parameter B with a dataset of sample density x% × N (#samples/m²), (c) then tested under clutter parameter A and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for the clutter parameter A.

 

Observation

For AI/ML assisted positioning and different network synchronization error, evaluation has been performed where the AI/ML model is (a) previously trained for network synchronization error A (ns) with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for network synchronization error B (ns) with a dataset of sample density x% × N (#samples/m²), (c) then tested under network synchronization error B (ns) and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for network synchronization error B (ns).

 

Observation

For AI/ML assisted positioning and different network synchronization error,

 

Observation

For AI/ML assisted positioning and different InF scenarios, evaluation has been performed where the AI/ML model is (a) previously trained for InF scenario A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for InF scenario B with a dataset of sample density x% × N (#samples/m²), (c) then tested under InF scenario B and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for InF scenario B.

 

Observation

For AI/ML assisted positioning and different InF scenarios, evaluation has been performed where the AI/ML model is (a) previously trained for InF-DH{60%,6m,2m} with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for InF-SH{20%,2m,10m} with a dataset of sample density x% × N (#samples/m²), (c) then tested under InF-DH{60%,6m,2m} and the horizontal accuracy at CDF=90% is E meters. Evaluation results show that,

Here  (meters) is the full training accuracy at CDF=90% for InF-DH{60%,6m,2m}.

 

Observation

For direct AI/ML positioning, evaluation results show that:

 

Observation

For direct AI/ML positioning,

 

 

R1-2308357         Summary #3 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Wednesday session

Agreement

For evaluation of AI/ML based positioning, when time domain samples are used as model input and sub-sampling is applied, the selection of N't measurements is based on the strongest power, unless explicitly stated otherwise. When sub-sampling is applied, the N't measurements are not necessarily consecutive in time.
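A minimal sketch of this sub-sampling rule, with illustrative names: the N't kept measurements are those with the strongest power, and they retain their (possibly non-consecutive) time positions.

```python
def subsample_strongest(cir, n_keep):
    """Select the n_keep time-domain measurements with the strongest
    power. The kept taps retain their original time indices, which
    are not necessarily consecutive."""
    power = [abs(tap) ** 2 for tap in cir]
    # indices of the n_keep strongest taps, returned in time order
    strongest = sorted(range(len(cir)), key=lambda i: power[i],
                       reverse=True)[:n_keep]
    return [(i, cir[i]) for i in sorted(strongest)]
```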

 

Agreement

For evaluation of AI/ML based positioning, when timing information is included in model input (e.g., in CIR/PDP/DP), training dataset and test dataset use the same timing format (i.e., both are absolute time, or both are relative time) unless explicitly stated otherwise.

 

Observation

For evaluation of AI/ML based positioning with multipath measurement for model input,

 

Observation

For AI/ML assisted positioning with LOS/NLOS indicator as model output and for different clutter parameters, evaluation has been performed where the AI/ML model is (a) previously trained for clutter parameter A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for clutter parameter B with a dataset of sample density x% × N (#samples/m²), (c) then tested under clutter parameter B and the LOS/NLOS indication accuracy is E (using F1-score). Evaluation results show that,

Here  is the full training accuracy (using F1-score) for the clutter parameter B.

 

Observation

For AI/ML assisted positioning with LOS/NLOS indicator as model output and for different clutter parameters, evaluation has been performed where the AI/ML model is (a) previously trained for clutter parameter A with a dataset of sample density N (#samples/m²), (b) followed by fine-tuning for clutter parameter B with a dataset of sample density x% × N (#samples/m²), (c) then tested under clutter parameter A and the LOS/NLOS indication accuracy is E (using F1-score). Evaluation results show that,

Here  is the full training accuracy (using F1-score) for the clutter parameter A.

 

Observation

Based on evaluation results from [3 sources: Ericsson, MediaTek, Nokia], for AI/ML assisted positioning where the model output includes the LOS/NLOS indicator, when the model is trained with a dataset containing random LOS/NLOS label error, the models show no or minor degradation in LOS/NLOS identification accuracy up to at least m%=20% and at least n%=20%. When the training dataset has up to m%=20% and n%=20% label error, evaluation results show that the LOS/NLOS identification accuracy is PLabelErr = PNoLabelErr – d (percentage), where d is in the range of (-1.2%~3.1%).

n%=FP/NNLOS is the false positive rate of the training data label, FP (False Positive) is the number of actual NLOS links which are incorrectly labelled as LOS, and NNLOS is the total number of actual NLOS links.
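The F1-score metric and label-error injection used in this observation can be sketched as follows. Only n% is defined in the excerpt above; treating m% as the corresponding false-negative rate (FN/N_LOS, with LOS=1 and NLOS=0) is an assumption, as are all function names.

```python
import random

def f1_score(y_true, y_pred, positive=1):
    """F1-score for the positive (LOS) class: harmonic mean of
    precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def corrupt_labels(labels, m_percent, n_percent, seed=0):
    """Inject random label error into training data: flip m% of true
    LOS labels to NLOS (false negatives, assumed definition of m%) and
    n% of true NLOS labels to LOS (false positives, per the n%
    definition above). LOS=1, NLOS=0."""
    rng = random.Random(seed)
    out = []
    for lab in labels:
        if lab == 1 and rng.random() < m_percent / 100:
            out.append(0)
        elif lab == 0 and rng.random() < n_percent / 100:
            out.append(1)
        else:
            out.append(lab)
    return out
```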

 

 

R1-2308358         Summary #4 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Thursday session

Observation

Based on evaluation results by [8 sources: vivo, xiaomi, Ericsson, MediaTek, Qualcomm, CATT, Nokia, InterDigital], for TRP reduction of direct AI/ML positioning, approaches supporting dynamic TRP pattern can achieve the horizontal positioning accuracy Edynamic = (0.80~2.15) × Efixed (meters), when other design parameters are held the same, where:

 

Observation

Based on evaluation results by [8 sources: vivo, xiaomi, Ericsson, MediaTek, Qualcomm, CATT, Nokia, InterDigital], for TRP reduction of direct AI/ML positioning, Approaches 1-A and 2-A achieve similar performance. The horizontal positioning accuracy E2A = (0.87~1.32) × E1A (meters), when other design parameters are held the same, where:

Friday: Note: Add IIT Madras as one of the sources

 

Observation

Based on evaluation results by [11 sources: Ericsson, vivo, xiaomi, MediaTek, Qualcomm, China Telecom, OPPO, CMCC, CATT, Huawei, InterDigital], for TRP reduction of direct AI/ML positioning, the positioning accuracy degrades as the number of active TRPs is reduced from 18 TRPs to 3 TRPs. The degradation increases as the number of active TRPs decreases.

Here E (meters) is the horizontal positioning accuracy at CDF=90% with N'TP active TRPs; E18TRP (meters) is the horizontal positioning accuracy at CDF=90% with NTP =18 active TRPs.

Note: some results from [2 sources: xiaomi, CATT] show E > 11 × E18TRP for N'TP=9 and 6 when using Approach 2-B.

Friday: Note: Add IIT Madras as one of the sources

 

Observation

Based on evaluation results by [2 sources: Ericsson, CATT], for TRP reduction of AI/ML assisted positioning with multi-TRP construction, approaches supporting dynamic TRP pattern can achieve the horizontal positioning accuracy Edynamic = (1.03~1.74) × Efixed (meters), when other design parameters are held the same, where:

Note: evaluation results of [1 source: MediaTek] show Edynamic = (5.66~8.12) × Efixed when the number of active TRPs is reduced from NTP =18 to N'TP =9 or 4.

 

Observation

Based on evaluation results by [2 sources: Ericsson, CATT], for TRP reduction of AI/ML assisted positioning, Approaches 1-A and 2-A achieve similar performance. The horizontal positioning accuracy E2A = (1~1.47) × E1A (meters), when other design parameters are held the same, where:

·       E1A (meters) is the horizontal positioning accuracy at CDF=90% for Approach 1-A;

·       E2A (meters) is the horizontal positioning accuracy at CDF=90% for Approach 2-A;

 

Observation

Based on evaluation results by [4 sources: Ericsson, CATT, vivo, MediaTek], for TRP reduction of AI/ML assisted positioning, the positioning accuracy degrades as the number of active TRPs is reduced from 18 TRPs to 3 TRPs. The degradation increases as the number of active TRPs decreases.

Here E (meters) is the horizontal positioning accuracy at CDF=90% with N'TP active TRPs; E18TRP (meters) is the horizontal positioning accuracy at CDF=90% with NTP =18 active TRPs.

Note: some results from [1 source: MediaTek] show E > 7.54 × E18TRP for N'TP=9 and E > 42.76 × E18TRP for N'TP=6 when using Approach 1-B/2-B.

 

Observation (Updated Observation made in RAN1#112bis)

For direct AI/ML positioning, based on evaluation results of network synchronization error in the range of 0-50 ns, when the model is trained by a dataset with network synchronization error t1 (ns) and tested in a deployment scenario with network synchronization error t2 (ns), for a given t1,

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 smaller than t1 is better than the cases with t2 equal to t1. For example,

o   For the case of (t1, t2)=(50ns, 10ns), evaluation results submitted to RAN1#112bis show the positioning error of (t1, t2)=(50ns, 10ns) is 0.52~0.83 times that of (t1, t2)=(50ns, 50ns).

o   For the case of (t1, t2)=(50ns, 0ns), evaluation results submitted to RAN1#112bis show the positioning error of (t1, t2)=(50ns, 0ns) is 0.50~0.82 times that of (t1, t2)=(50ns, 50ns).

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 greater than t1 is worse than the cases with t2 equal to t1. The larger the difference between t1 and t2, the more the degradation. For example,

o   For the case of (t1, t2)=(0ns, 10ns), evaluation results submitted to RAN1#112bis show the positioning error of (0ns, 10ns) is 1.17~9.5 times that of (0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 50ns), evaluation results submitted to RAN1#112bis show the positioning error of (0ns, 50ns) is 10~40 times that of (0ns, 0ns).

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.

 

Observation (Updated Observation made in RAN1#113)

For AI/ML assisted positioning with timing information (e.g., ToA) as model output, based on evaluation results of network synchronization error in the range of 0-50 ns, when the model is trained by a dataset with network synchronization error t1 (ns) and tested in a deployment scenario with network synchronization error t2 (ns), for a given t1,

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 smaller than t1 is better than the cases with t2 equal to t1. For example,

o   For the case of (t1, t2)=(50ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 20~25ns) is 0.64~0.85 times that of (t1, t2)=(50ns, 50ns).

o   For the case of (t1, t2)=(50ns, 0ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 0ns) is 0.50~0.80 times that of (t1, t2)=(50ns, 50ns).

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 greater than t1 is worse than the cases with t2 equal to t1. The larger the difference between t1 and t2, the more the degradation. For example,

o   For the case of (t1, t2)=(0ns, 10ns), evaluation results submitted to RAN1#113 show the positioning error of (0ns, 10ns) is 1.16~4.40 times that of (0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (0ns, 20~25ns) is 2.19~10.11 times that of (0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 50ns), evaluation results submitted to RAN1#113 show the positioning error of (0ns, 50ns) is 9.68~31.95 times that of (0ns, 0ns).

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.

 

Observation (Updated Observation made in RAN1#113)

For AI/ML assisted positioning with timing information (e.g., ToA) as model output, based on evaluation results of timing error in the range of 0-50 ns, when the model is trained by a dataset with UE/gNB RX and TX timing error t1 (ns) and tested in a deployment scenario with UE/gNB RX and TX timing error t2 (ns), for a given t1,

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 smaller than t1 is better than the cases with t2 equal to t1. For example,

o   For the case of (t1, t2)=(50ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 20~25ns) is 0.75~1.00 times that of (t1, t2)=(50ns, 50ns).

o   For the case of (t1, t2)=(50ns, 0ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(50ns, 0ns) is 0.76~0.99 times that of (t1, t2)=(50ns, 50ns).

·       For a case evaluated by a given source, the positioning accuracy of cases with t2 greater than t1 is worse than the cases with t2 equal to t1. The larger the difference between t1 and t2, the more the degradation. For example,

o   For the case of (t1, t2)=(0ns, 10ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(0ns, 10ns) is 1.34~5.43 times that of (t1, t2)=(0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 20~25ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(0ns, 20~25ns) is 5.66~13.0 times that of (t1, t2)=(0ns, 0ns).

o   For the case of (t1, t2)=(0ns, 50ns), evaluation results submitted to RAN1#113 show the positioning error of (t1, t2)=(0ns, 50ns) is 10.62~51.52 times that of (t1, t2)=(0ns, 0ns).

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.

 

 

R1-2308359         Summary #5 of Evaluation on AI/ML for positioning accuracy enhancement      Moderator (Ericsson)

From Friday session

Observation

Evaluation of TRP reduction for both direct AI/ML positioning and AI/ML assisted positioning shows that identification of the active TRPs is beneficial for Approach 2-B; otherwise, the model suffers from poor positioning accuracy.

 

Observation (Updated Observation made in RAN1#113)

For direct AI/ML positioning, the evaluation of positioning accuracy at model inference is affected by the type of model input and AI/ML complexity. For a given AI/ML model design, there is a tradeoff between model input, AI/ML complexity (model complexity and computational complexity), and positioning accuracy. Evaluation results show that, if the model input type is changed while other parameters (e.g., Nt, N't, Nport, N'TRP) are held the same,

 

Observation

Based on evaluation results of [3 sources: Ericsson, Apple, Qualcomm], direct AI/ML positioning and AI/ML assisted positioning can achieve comparable performance when simulation assumptions and parameters (e.g., clutter parameter, model input type, model input size, training dataset size, model complexity) are held the same, Edirect = (0.57~1.14) × Eassisted, where

·       Eassisted (meters) is the horizontal positioning accuracy at CDF=90% of AI/ML assisted positioning with multi-TRP construction with timing information as model output,

·       Edirect (meters) is the horizontal positioning accuracy at CDF=90% of direct AI/ML positioning.

 

 

Final summary in R1-2308628.

R1-2308683         Evaluation results of AI/ML for positioning accuracy enhancement       Moderator (Ericsson)

9.2.4.22       Other aspects on AI/ML for positioning accuracy enhancement

Including potential specification impact.

 

R1-2306418         Discussion on other aspects on AI/ML for positioning accuracy enhancement       New H3C Technologies Co., Ltd.

R1-2306455         Other Aspects of AI/ML Based Positioning Enhancement              Ericsson

R1-2306480         AI and ML for positioning enhancement       NVIDIA

R1-2306516         Discussion on AI/ML for positioning accuracy enhancement              Huawei, HiSilicon

R1-2306641         Discussion on other aspects on AIML for positioning accuracy enhancement       Spreadtrum Communications

R1-2306745         Other aspects on AI/ML for positioning accuracy enhancement              vivo

R1-2306800         Discussion on other aspects for AI positioning enhancement              ZTE

R1-2306905         Remaining issues on AI/ML for positioning accuracy enhancement       Sony

R1-2306962         On Enhancement of AI/ML based Positioning            Google

R1-2307017         Other aspects on AI/ML for positioning accuracy enhancement              LG Electronics

R1-2307081         Discussion on other aspects for AI/ML positioning accuracy enhancement       CATT

R1-2307133         Discussion on AI/ML for positioning accuracy enhancement              NEC

R1-2307158         Discussions on specification impacts for AIML positioning accuracy enhancement       Fujitsu

R1-2307188         Discussion on other aspects on AI/ML for positioning accuracy enhancement       CMCC

R1-2307236         On potential AI/ML solutions for positioning              Fraunhofer IIS, Fraunhofer HHI

R1-2307243         Other aspects on ML for positioning accuracy enhancement              Nokia, Nokia Shanghai Bell

R1-2307273         On Other aspects on AI/ML for positioning accuracy enhancement       Apple

R1-2307342         Other aspects on AI-ML for positioning accuracy enhancement              Baicells

R1-2307380         Views on the other aspects of AI/ML-based positioning accuracy enhancement       xiaomi

R1-2307470         Discussion on other aspects on AI/ML for positioning accuracy enhancement       NTT DOCOMO, INC.

R1-2307569         On sub use cases and other aspects of AI/ML for positioning accuracy enhancement       OPPO

R1-2307583         Designs and potential specification impacts of AIML for positioning          InterDigital, Inc.

R1-2307673         Discussion on potential specification impact for Positioning              Samsung

R1-2307811         AI/ML Positioning use cases and associated Impacts  Lenovo

R1-2307864         Discussions on AI-ML for positioning accuracy enhancement              CAICT

R1-2307921         Other aspects on AI/ML for positioning accuracy enhancement              Qualcomm Incorporated

R1-2308057         Other Aspects on AI ML Based Positioning Enhancement              MediaTek Inc.

 

R1-2308254         FL summary #1 of other aspects on AI/ML for positioning accuracy enhancement    Moderator (vivo)

From Tuesday session

Agreement

Regarding data collection for AI/ML based positioning, at least the following data-related information with potential specification impact is identified.

Corresponding Working Assumption does not need to be confirmed

 

Observation

For direct AI/ML positioning with LMF-side model (Case 2b and 3b), the following types of measurement report are identified, if beneficial and necessary (e.g., trading off positioning accuracy requirements against signaling overhead),

 

 

R1-2308255         FL summary #2 of other aspects on AI/ML for positioning accuracy enhancement    Moderator (vivo)

From Wednesday session

Observation

Regarding monitoring for AI/ML based positioning, at least the following types of monitoring metrics have been studied

 

 

R1-2308256         FL summary #3 of other aspects on AI/ML for positioning accuracy enhancement    Moderator (vivo)

From Thursday session

Observation

For direct AI/ML positioning with LMF-side model (Case 2b and 3b), the following types of measurement report with potential specification impact have been studied for AI/ML based positioning accuracy enhancement

 


 RAN1#114-bis

8.14   Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface

R1-2310540         Session notes for 8.14 (Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface) Ad-hoc Chair (CMCC)

Friday decision: The session notes are endorsed and contents reflected below with modification.

MCC note: It is RAN1 understanding that the yellow highlighted text under 8.14 means "not agreed" and may require further discussion in the next meeting.

 

Please refer to RP-221348 for detailed scope of the SI.

 

[114bis-R18-AI/ML] – Taesang (Qualcomm)

Email discussion on AI/ML

-        To be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc

 

R1-2308837         Text Proposals to TR 38.843           Ericsson

R1-2310163         Updated TR 38.843 after RAN1#114b        Qualcomm Incorporated

Friday decision: The updated TR 38.843 is endorsed.

8.14.1    General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2308915         Remaining issues on general aspects of AI/ML framework              Huawei, HiSilicon

R1-2308937         Discussion on remaining open issues of AI/ML for air-interface general framework             FUTUREWEI

R1-2308954         Evaluation on AI/ML for CSI feedback enhancement RAN1, Comba

R1-2309002         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2309093         Discussions on AI/ML framework  vivo

R1-2309143         Discussion on general aspects of common AI PHY framework              ZTE

R1-2309167         Remaining issues on AI/ML framework        LG Electronics

R1-2309184         Discussion on general aspects of AI/ML framework   Ericsson

R1-2309185         General aspects of AI and ML framework for NR air interface              NVIDIA

R1-2309204         General aspects of AI/ML framework for NR air interface              Intel Corporation

R1-2309249         General aspects of AI/ML framework for NR air interface              Baicells

R1-2309259         On General Aspects of AI/ML Framework   Google

R1-2309285         Discussion on general aspects of AI ML framework   NEC

R1-2309337         Remaining issues on general aspects of AI/ML framework              Panasonic

R1-2309396         Samsung's view on the remaining general aspects of AI/ML framework           Samsung

R1-2309437         Discussion on the remaining issues of AI/ML framework              xiaomi

R1-2309507         On general aspects of AI/ML framework      CATT

R1-2309544         Discussions on general aspects of AI/ML framework Mavenir

R1-2309617         On general aspects of AI/ML framework      OPPO

R1-2309643         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2309689         Discussion on general aspects of AI/ML framework   CMCC

R1-2309723         Discussion on general aspects of AI/ML framework   Continental Automotive

R1-2309770         Discussions on general aspects of AI/ML framework Ruijie Network Co. Ltd

R1-2309773         Discussion on general aspects of AI/ML framework   Sharp

R1-2309806         Considering on system architecture for AI/ML framework deployment          TCL

R1-2309854         Discussion on general aspect of AI/ML framework     Apple

R1-2309873         Prediction of untransmitted beams in a UE-side AI-ML model              Rakuten Symphony

R1-2309886         General aspects of AI/ML framework           Fraunhofer IIS, Fraunhofer HHI

R1-2309911         Remaining issues on general AI/ML framework         Sony

R1-2309951         On general aspects of AI/ML framework      Lenovo

R1-2309955         Discussion on general aspects of AI/ML framework   InterDigital, Inc.

R1-2310052         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2310080         General Aspects of AI/ML framework          AT&T

R1-2310183         General aspects of AI/ML framework           Qualcomm Incorporated

R1-2310184         On Functionality and Model ID -based LCM Nokia, Nokia Shanghai Bell

R1-2310234         On General Aspects of AI/ML Framework   IIT Kanpur, Indian Institute of Tech (M)

R1-2310237         Discussions on General Aspects of AI/ML Framework              Indian Institute of Tech (M), IIT Kanpur

 

R1-2310368         Summary#1 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Monday session

Agreement

Model-ID, if needed, can be used in a Functionality (defined in functionality-based LCM) for LCM operations.

 

 

R1-2310369         Summary#2 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

Presented in Tuesday session.

 

R1-2310370         Draft reply LS on Data Collection Requirements and Assumptions       Moderator (Qualcomm)

R1-2310371         Summary#3 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Wednesday session

Agreement

For CSI compression (For reply LS)

| LCM purpose | Data content | Typical data size (per data sample) | Typical latency requirement | Notes |
|-------------|--------------|--------------------------------------|-------------------------------|-------|
| Training | Target CSI | See Notes 1, 2 | Relaxed | This row applies to Type 1, Type 2, and the first or second stage of the described procedure of Type 3 separate training. |
| Training | CSI Feedback | See Note 3 | Relaxed | This is for dataset delivery for the second stage of the described procedure of Type 3 separate training (either from Network side to UE side, or from UE side to Network side) and forward propagation information for Type 2 training. See Note 7. |
| Training | Gradients for CSI Feedback | No agreement | Relaxed | This is for backward propagation for Type 2 training. See Note 7. |
| Inference | CSI Feedback | See Note 3 | Time-critical | Can use L1 report similar to legacy CSI. |
| Monitoring | Reconstructed CSI from NW to UE (See Note 6) | No agreement; [expected to be similar to target CSI for monitoring] | Near-real-time | This is called “UE-sided monitoring” in RAN1. |
| Monitoring | Calculated performance metrics (See Note 6) | See Note 4 | Near-real-time | This is called “UE-sided monitoring” in RAN1. |
| Monitoring | Target CSI (See Note 6) | See Notes 1, 2 | Near-real-time | This is called “NW-sided monitoring” in RAN1. |

 

Note 1: Target CSI may be precoding matrix or channel matrix. RAN1’s reply for data size is based on precoding matrix, which has been more widely evaluated than channel matrix.

Note 2: Data size for target CSI depends on the format. There is no agreement on the format or necessary precision of the target CSI. Some examples based on companies’ evaluations are: eType-II format (up to ~1000 bits), eType-II-like format (a few thousand bits), and float32 format (up to ~150K bits). The data size may also vary depending on the configuration, and the captured value indicates the order of magnitude of the typical data size per sample as a guideline.

Note 3: There is no agreement on the CSI feedback size. Values in the order of eType II payload size may be assumed (up to ~ 1000 bits) for RAN2 discussion.

Note 4: There is no agreement on the exact metric or reporting format. An example based on companies’ evaluations is SGCS (tens of bits).

Note 5: There are no agreements on the reporting type.

Note 6: Feasibility and necessity of the monitoring schemes listed in the table are under discussion

Note 7: RAN1 has agreed to deprioritize Type 2 training over the air interface.

 

Note(serve as trace in session notes)

Data size for target CSI depends on the format and configuration; for example:

 

Agreement

For CSI prediction at UE side (For reply LS)

| LCM purpose | Data content | Typical data size (per data sample) | Typical latency requirement | Notes |
|-------------|--------------|--------------------------------------|-------------------------------|-------|
| Training | Target CSI in observation and prediction window | See Notes 1, 2 | Relaxed | |
| Inference | Predicted CSI feedback (AI/ML output) | See Note 3 | Time-critical | Can use L1 report similar to legacy CSI. |
| Monitoring | Ground truth (i.e., target CSI) corresponding to predicted CSI (See Note 6) | See Notes 1, 2 | Near-real-time | |
| Monitoring | Calculated performance metrics / Performance monitoring output (See Note 6) | See Note 5 | Near-real-time | |

 

Note 1: Target CSI may be a precoding matrix or a channel matrix. RAN1's reply for data size is based on the channel matrix, which has been more widely evaluated than the precoding matrix.

Note 2: Data size for target CSI depends on the format. There is no agreement on the format or precision of the target CSI. The data size may also vary depending on the configuration, and the captured value indicates the order of magnitude of the typical data size per sample as a guideline. One example based on companies’ evaluations is up to around 1.5Mbits, assuming float 32 and 10 CSI-RS observation instances as input to predict one future CSI instance.

Note 3: There is no agreement on the predicted CSI feedback size. Values in the order of eType II payload size may be assumed (up to ~ 1000 bits) for RAN2 discussion.

Note 4: There are no agreements on the reporting type.

Note 5: There is no agreement on the performance metric or monitoring output details.

Note 6: Feasibility and necessity of the monitoring schemes listed in the table are under discussion.

 

Note (serves as trace in session notes)

Data size for target CSI depends on the format and configuration, for example:

·       In floating point format (32 bits per sample), the channel matrix for 4 layers, 19 subbands (one matrix per subband), 32 ports needs around 150 kilobits per CSI-RS instance. Assuming 10 CSI-RS observation instances as input to predict one future CSI instance, the total is around 1.5M bits. This number doesn’t account for any potential compression techniques.
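The arithmetic in this example can be reproduced with a short script (illustrative only; the layer/subband/port dimensions and the float32 format are the example assumptions above, not agreed values):

```python
# Illustrative check of the example target-CSI data size for CSI prediction.
# Assumptions (taken from the example above, not agreed values): complex
# float32 entries (2 * 32 bits each), 4 layers, 19 subbands (one matrix per
# subband), 32 ports, 10 CSI-RS observation instances; no compression.
BITS_PER_COMPLEX_ENTRY = 2 * 32

layers, subbands, ports = 4, 19, 32
bits_per_instance = layers * subbands * ports * BITS_PER_COMPLEX_ENTRY
total_bits = 10 * bits_per_instance

print(bits_per_instance)  # 155648 -> around 150 kilobits per CSI-RS instance
print(total_bits)         # 1556480 -> around 1.5 Mbits for 10 instances
```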

Agreement

For an AI/ML-enabled feature/FG, additional conditions refer to any aspects that are assumed for the training of the model but are not a part of UE capability for the AI/ML-enabled feature/FG.

·       It doesn’t imply that additional conditions are necessarily specified

Agreement

·       Additional conditions can be divided into two categories: NW-side additional conditions and UE-side additional conditions.

·       Note: whether specification impact is needed is separate discussion

 

R1-2310372         Summary#5 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Thursday session

Agreement

For Beam management (For reply LS)

| LCM purpose | UE-side/NW-side models | Data content | Typical data size (per data sample) | Typical latency requirement | Notes |
| --- | --- | --- | --- | --- | --- |
| Training | UE-side, NW-side | L1-RSRPs and/or beam-IDs | See Note 1 for L1-RSRPs | Relaxed | |
| Inference | UE-side | Beam prediction results | Small (10s of bits) | Time-critical | RAN1 has agreed to consider L1 signalling for this reporting |
| Inference | NW-side | L1-RSRPs, and Beam-IDs if needed, for Set B | See Note 1 for L1-RSRPs | Time-critical | |
| Monitoring | UE-side | Event occurrence and/or calculated performance metrics (from UE to NW) (see Note 4) | Small (10s of bits) | Near-real-time | |
| Monitoring | UE-side | L1-RSRP(s) and/or beam-ID(s) (see Note 4) | Up to 10 bits, or up to 100 bits, or up to hundreds of bits; see Note 1 for L1-RSRPs | Near-real-time | |
| Monitoring | NW-side | L1-RSRP(s) and/or beam-ID(s) (see Note 4) | Up to 10 bits, or up to 100 bits, or up to hundreds of bits; see Note 1 for L1-RSRPs | Near-real-time | |

 

 

Note 1: There is no agreement on the data size of L1-RSRPs for Set A or Set B, but the following typical data sizes are provided as guidance for the RAN2 discussion. Based on the existing L1-RSRP reporting methodology, i.e., 7 bits for the strongest beam and 4 bits for each remaining beam, for Set B = 16 as an example, the typical data size would be 67 bits (hence up to ~100 bits), and for Set A = 128 as an example, the typical data size would be 515 bits (hence up to ~500 bits) if all beams in Set A were to be collected. For BM Case 2, the data size of L1-RSRPs for Set A and Set B represents the data size per predicted future time instance and per history measurement time instance, respectively. The payload size may not be fixed.

Note 2: There are no agreements on the reporting type.

Note 4: Feasibility and necessity of the monitoring schemes listed in the table are under discussion.

Note 5: For BM Case 2, the typical number of history measurement time instances used in evaluations is up to 8, and the typical number of predicted future time instances is 1~4.
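The sizes in Note 1 follow directly from the legacy L1-RSRP report format; a minimal sketch of the computation (assuming the existing methodology of 7 bits for the strongest beam and 4-bit differential values for every other beam):

```python
# Illustrative computation of the typical L1-RSRP report sizes in Note 1,
# based on the existing reporting methodology: 7 bits for the strongest
# beam plus a 4-bit differential value for each remaining beam.
def l1_rsrp_report_bits(num_beams: int) -> int:
    return 7 + 4 * (num_beams - 1)

print(l1_rsrp_report_bits(16))   # 67 bits for Set B = 16 (up to ~100 bits)
print(l1_rsrp_report_bits(128))  # 515 bits for Set A = 128 (up to ~500 bits)
```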

 

 

R1-2310373         Summary#6 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Friday session

Agreement

For positioning (For reply LS)

| LCM purpose | Case | Data content | Typical data size (per data sample) | Typical latency requirement | Notes |
| --- | --- | --- | --- | --- | --- |
| Training | All Cases | Measurements (corresponding to model input): timing, power, and/or phase info (see Note 2) | Size depends on number of PRS/SRS resources, measurement type (timing, power, and/or phase info) and report format: ~100 bits to 1000s bits per PRS/SRS resource (see Note 3) | Relaxed | |
| Training | Direct AI/ML positioning | Label: location coordinates as model output | 56 to 144 bits (see Note 3) | Relaxed | |
| Training | AI/ML assisted positioning | Label: intermediate positioning measurement (timing info, LOS/NLOS indicator) as model output (see Note 2) | 10s bits to 100s bits per PRS/SRS resource (see Note 3) | Relaxed | |
| Inference | 1 | Location coordinates as model output | 56 to 144 bits (see Note 3) | See Note 5 | |
| Inference | 2a, 3a | Intermediate positioning measurement (timing info, LOS/NLOS indicator) as model output (see Note 2) | 10s bits to 100s bits per PRS/SRS resource (see Note 3) | See Note 5 | |
| Inference | 2b, 3b | Measurements (corresponding to model input): timing, power, and/or phase info (see Note 2) | Size depends on number of PRS/SRS resources, measurement type (timing, power, and/or phase info) and report format: ~100 bits to 1000s bits per PRS/SRS resource (see Note 3) | See Note 5 | |
| Monitoring | All Cases | See Note 8 | See Note 8 | Near-real-time | See Notes 6 and 7 |

 

Note 1: The necessity and feasibility of the different cases (Case 1 to Case 3b) need further discussion/conclusion.

Note 2: For measurements as model input, there is no agreement on measurement types (i.e., time, power, and/or phase) in RAN1 for all cases (i.e., Case 1 to Case 3b). Measurement types (including their necessity) and sizes/dimensions need to be further discussed. Candidate measurement types discussed/evaluated for model input include CIR (contains timing, power and phase information), PDP (contains timing and power information), and DP (contains timing information). For labels (i.e., model output) of AI/ML assisted positioning (Case 2a, Case 3a), RAN1 identified an initial listing of candidates that provide performance benefits (i.e., timing info, LOS/NLOS indicator). RSRP/RSRPP is for further discussion.

Note 3: The measurement size of one data sample = (measurement data size of one PRS/SRS resource) * (number of PRS/SRS resources needed for model input). The label size of one data sample = (label data size of one PRS/SRS resource) * (number of PRS/SRS resources needed for model output). The quantization and bit representation of time, power, and phase information (including their necessity) still need to be further discussed. The existing specification allows reporting of up to 64 PRS/SRS resources per frequency layer for one positioning fix. For evaluations, most companies considered up to 18 TRPs. It should be noted that AI/ML positioning is not restricted to work only with a maximum of 18 TRPs.

Note 4: No agreement on reporting types (i.e., periodicity, event-triggered/on-demand, etc.).

Note 5: There are no agreements on the reporting latency.

Note 6: RAN1 agreed on an initial listing of entities that can derive the monitoring metric for AI/ML positioning for the different cases (Case 1 to Case 3b):

 - Case 1: at least the UE derives the monitoring metric

 - Case 2a: at least the UE derives the monitoring metric, and the LMF if monitoring is based on ground truth

 - Case 3a: at least the gNB/TRP derives the monitoring metric, and the LMF if monitoring is based on ground truth

 - Cases 2b and 3b: at least the LMF derives the monitoring metric

Note 7: No agreement yet on a monitoring decision entity or their mapping to other entities (e.g., entity running the inference, entity deriving the monitoring metric, etc.).

Note 8: RAN1 has studied several types of related statistics where potential request/report of Monitoring related statistics and its necessity are for further discussion.
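The sizing rule in Note 3 can be sketched as follows; the per-resource bit count used in the example is an assumed value within the ~100-to-1000s-of-bits range quoted in the table, not an agreed number:

```python
# Illustrative use of the sizing rule in Note 3: measurement size of one
# data sample = (measurement data size of one PRS/SRS resource)
#               * (number of PRS/SRS resources needed for model input).
def sample_size_bits(bits_per_resource: int, num_resources: int) -> int:
    return bits_per_resource * num_resources

# Example: 18 TRPs (the largest count most companies evaluated), with an
# assumed 500 bits of timing/power/phase info per PRS/SRS resource.
print(sample_size_bits(500, 18))  # 9000 bits per data sample
```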

 

Agreement

For drafting LS,

This LS reply is meant to capture existing RAN1 agreements/conclusions/observations and discussions for the purpose of replying to the RAN2 LS. The LS reply does not serve as additional agreements/conclusions/observations beyond what RAN1 has already agreed/concluded/observed.

 

Agreement

Common Notes for all sub-use-cases:

·       In answering latency requirements, RAN1 used the following descriptions:

o   Relaxed (e.g., minutes, hours, days, or no latency requirement)

o   Near-real-time (e.g., several tens of msecs to a few seconds)

o   Time-critical (e.g., a few msecs)

·       In the reply, RAN1 captured the typical data size per each data sample.

·       Model training is assumed to be offline training.

·       In RAN1’s answer, RAN1 did not list assistance information. RAN1 has informed RAN2 of related conclusions/agreements/observations regarding assistance information in the RAN1 response to Part A.

·       There may be other information identified for training not included in the tables. For example, in positioning enhancement, some information has been considered as potential spec impact (e.g., quality indicators, time stamps, RS configuration(s)).

·       In this reply for Part B, the term 'NW-side monitoring' is not explicitly used since RAN1’s understanding of the term is not fully aligned with RAN2 terminology. Rather, RAN1 explained directly the data contents for monitoring. It should also be noted that in the RAN1 response to part A, RAN1 used the term ‘NW-sided monitoring’ aligned with RAN2.

·       For monitoring, RAN1 provided replies only for near-real-time monitoring. The requirements for data collection for relaxed monitoring, if necessary, can be considered to be similar to offline training requirements.

Agreement

For inference for UE-side models, to ensure consistency between training and inference regarding NW-side additional conditions (if identified), the following options can be taken as potential approaches (when feasible and necessary):

·       Model identification to achieve alignment on the NW-side additional condition between NW-side and UE-side

·       Model training at NW and transfer to UE, where the model has been trained under the additional condition

·       Information and/or indication on NW-side additional conditions is provided to UE

·       Consistency assisted by monitoring (by UE and/or NW) of the performance of UE-side candidate models/functionalities, in order to select a model/functionality

·       Other approaches are not precluded

·       Note: it does not deny the possibility that different approaches can achieve the same function.

 

R1-2310638         Draft reply LS on Data Collection Requirements and Assumptions       Moderator (Qualcomm)

Friday decision: The draft LS to RAN2 is endorsed. Final version is approved in R1-2310681.

 

R1-2310681         Reply LS on Data Collection Requirements and Assumptions              RAN1, Qualcomm

 

 

R1-2310374         Final summary of General Aspects of AI/ML Framework              Moderator (Qualcomm)

8.14.2    Other aspects on AI/ML for CSI feedback enhancement

Including potential specification impact. Consider RAN agreement from RAN#100 in RP-231481 (proposal 1).

 

R1-2308873         Discussions on AI-CSI       Ericsson

R1-2308916         Remaining issues on AI/ML for CSI feedback enhancement              Huawei, HiSilicon

R1-2308938         Discussion on remaining open issues for other aspects of AI/ML for CSI feedback enhancement        FUTUREWEI

R1-2309003         Discussion on other aspects on AIML for CSI feedback              Spreadtrum Communications

R1-2309094         Other aspects on AI/ML for CSI feedback enhancement              vivo

R1-2309144         Discussion on other aspects for AI CSI feedback enhancement              ZTE

R1-2309168         Remaining issues on AI/ML for CSI enhancement      LG Electronics

R1-2309186         AI and ML for CSI feedback enhancement   NVIDIA

R1-2309207         Discussion on AI/ML for CSI feedback         Intel Corporation

R1-2309260         On Enhancement of AI/ML based CSI           Google

R1-2309271         Other aspects on AI/ML for CSI feedback enhancement              NEC

R1-2309397         Samsung's view on remaining aspects on AI/ML for CSI feedback enhancement       Samsung

R1-2309438         Remaining issues discussion on specification impact for CSI feedback based on AI/ML xiaomi

R1-2309508         On other aspects for AI/ML CSI feedback enhancement              CATT

R1-2309558         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2309618         On other aspects of AI/ML for CSI feedback enhancement              OPPO

R1-2309631         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2309644         Views on specification impact for CSI feedback enhancement              Fujitsu

R1-2309690         Discussion on other aspects on AI/ML for CSI feedback enhancement       CMCC

R1-2309855         Discussion on other aspects of AI/ML for CSI enhancement              Apple

R1-2309869         Discussions on CSI measurement enhancement for AI/ML communication   TCL

R1-2309872         Varying CSI feedback granularity based on channel conditions              Rakuten Symphony

R1-2309912         Remaining issues on CSI measurement enhancements via AI/ML              Sony

R1-2309931         Other aspects on AI/ML for CSI feedback enhancement              ITL

R1-2309952         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2309956         Discussion on AI/ML for CSI feedback enhancement InterDigital, Inc.

R1-2309997         Other aspects on AI/ML for CSI feedback enhancement              MediaTek Inc.

R1-2310053         Discussion on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.

R1-2310081         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2310164         Other aspects on AI/ML for CSI feedback enhancement              Qualcomm Incorporated

R1-2310185         Other aspects on AI/ML for CSI feedback enhancement              Nokia, Nokia Shanghai Bell

R1-2310238         Discussions on Other Aspects on AI/ML for CSI Feedback Enhancement       Indian Institute of Tech (M), IIT Kanpur, CEWiT

 

R1-2310312         Summary #1 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Monday session

 

In the CSI compression using two-sided model use case, the following table captures the pros/cons of training collaboration type 2 and type 3:

 

| Characteristics | Type 2: Simultaneous | Type 2: Sequential, NW first (note 1) | Type 3: NW first | Type 3: UE first |
| --- | --- | --- | --- | --- |
| Whether model can be kept proprietary | Yes (note 2) | Yes (note 2) | Yes (note 2) | Yes (note 2) |
| Whether require privacy-sensitive dataset sharing | No (note 3) | No (note 3) | No (note 3) | No (note 3) |
| Flexibility to support cell/site/scenario/configuration specific model | Difficult | FFS | FFS | FFS |
| Whether gNB/device specific optimization is allowed | Yes | Yes | Yes | Yes |
| Model update flexibility after deployment (note 4) | Not flexible | Semi-flexible; less flexible compared to type 3 | Semi-flexible | Semi-flexible |
| Feasibility of allowing UE side and NW side to develop/update models separately | Infeasible | FFS | FFS | FFS |
| Whether gNB can maintain/store a single/unified model over different UE vendors for a CSI report configuration | Yes. Performance refers to observations in "1 NW part model to M>1 UE part models" and "1 UE part model to N>1 NW part models" of Section 6.2.2.4, TR 38.843 | Yes. Performance refers to observations in "1 NW part model to M>1 UE part models" of Section 6.2.2.4, TR 38.843 | Yes. Performance refers to observations in "NW first training, 1 NW part model to 1 UE part model, same backbone" and "NW first training, 1 NW part model to 1 UE part model, different backbones" of Section 6.2.2.5, TR 38.843 | Yes. Performance refers to observations in "UE first training, M>1 UE part models to 1 NW part model" of Section 6.2.2.5, TR 38.843 |
| Whether UE device can maintain/store a single/unified model over different NW vendors for a CSI report configuration | Yes. Performance loss refers to observations in "1 NW part model to M>1 UE part models" and "1 UE part model to N>1 NW part models" of Section 6.2.2.4, TR 38.843 | Yes. Performance loss refers to observations in "1 NW part model to M>1 UE part models" of Section 6.2.2.4, TR 38.843 | Yes per camped cell. For generalization over multiple NWs, performance loss refers to observations in "NW first training, 1 UE part model to N>1 NW part models" of Section 6.2.2.5, TR 38.843 | Yes. Performance loss refers to observations in "UE first training, 1 NW part model to 1 UE part model, same backbone" and "UE first training, 1 NW part model to 1 UE part model, different backbones" of Section 6.2.2.5, TR 38.843 |
| Extendibility: to train new UE-side model compatible with NW-side model in use | Not support | Support | Support | FFS |
| Extendibility: to train new NW-side model compatible with UE-side model in use | Not support | Not support | FFS | Support |
| Whether training data distribution can match the inference device | More limited | FFS | Limited | Yes |
| Software/hardware compatibility (whether device capability can be considered for model development) | Compatible | Compatible | Compatible | Compatible |
| Model performance based on evaluation in 9.2.2.1 | Performance refers to 9.2.2.1 observations | Performance refers to 9.2.2.1 observations | Performance refers to 9.2.2.1 observations | Performance refers to 9.2.2.1 observations |

 

In the CSI compression using two-sided model use case, the following table captures the pros/cons of training collaboration type 1:

 

| Characteristics | Type 1 NW side: unknown model structure at UE | Type 1 NW side: known model structure at UE | Type 1 UE side: unknown model structure at NW | Type 1 UE side: known model structure at NW |
| --- | --- | --- | --- | --- |
| Whether model can be kept proprietary | No | No | No | No |
| Whether require privacy-sensitive dataset sharing | No (note 3) | No (note 3) | No (note 3) | No (note 3) |
| Flexibility to support cell/site/scenario/configuration specific model | FFS | FFS | FFS | FFS |
| Whether gNB/device specific optimization is allowed | gNB: Yes; UE: No | gNB: Yes; UE: FFS | gNB: No; UE: Yes | UE: Yes; gNB: FFS |
| Model update flexibility after deployment | Flexible | Flexible for parameter update | Flexible; less flexible than Type 1 NW side | Flexible for parameter update; less flexible than Type 1 NW side |
| Feasibility of allowing UE side and NW side to develop/update models separately | FFS | FFS | FFS | FFS |
| Whether gNB can maintain/store a single/unified model over different UE vendors for a CSI report configuration | Yes | Yes for gNB-part model; FFS for UE-part model | No | No |
| Whether UE device can maintain/store a single/unified model over different NW vendors for a CSI report configuration | Yes per camped cell. No | Yes per camped cell. No | Yes | Yes |
| Extendibility: to train new UE-side model compatible with NW-side model in use | FFS | FFS | FFS | FFS |
| Extendibility: to train new NW-side model compatible with UE-side model in use | FFS | FFS | FFS | FFS |
| Whether training data distribution can match the inference device | FFS | FFS | FFS | FFS |
| Software/hardware compatibility (whether device capability can be considered for model development) | FFS | FFS | FFS | FFS |
| Model performance based on evaluation in 9.2.2.1 | Performance refers to 9.2.2.1 observations | Performance refers to 9.2.2.1 observations | Performance refers to 9.2.2.1 observations | Performance refers to 9.2.2.1 observations |

 

Note 2: Assume information on the model structure disclosed in training collaboration does not reveal proprietary information.

Note 3: Assume the precoding matrix is not privacy-sensitive data. FFS: other information such as the channel matrix and assistance information.

 

 

R1-2310313         Summary #2 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Tuesday session

Agreement

In CSI compression using two-sided model use case with training collaboration type 3, for sequential training, at least the following aspects have been identified for dataset delivery from RAN1 perspective:

·       Dataset and/or other information delivery from UE side to NW side, which can be used at least for CSI reconstruction model training

·       Dataset and/or other information delivery from NW side to UE side, which can be used at least for CSI generation model training

·       Potential dataset delivery methods including offline delivery, and over the air delivery

·       Data sample format/type

·       Quantization/de-quantization related information

 

R1-2310314         Summary #3 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Wednesday session

Agreement

Specification support of quantization alignment for CSI feedback between the CSI generation part at the UE and the CSI reconstruction part at the NW is needed for supporting the CSI compression using two-sided model use case, e.g.,

·       through model pairing process,

·       alignment based on standardized quantization scheme.

·       Additional methods are not precluded.

Agreement

·       In CSI compression using two-sided model use case, for the CSI report format, when output-CSI-UE and input-CSI-NW are the precoding matrix: CSI part 1 includes at least the CQI for the first codeword, RI, and information representing the part 2 size; CSI part 2 includes at least the content of the CSI generation part output.

·       Other CSI report formats are not precluded

Agreement

·       Modify row item in previous conclusion from “Whether gNB can maintain/store a single/unified model” to “Whether gNB can maintain/store a single/unified CSI reconstruction model over different UE vendors”.

·       Modify row item in previous conclusion from “Whether UE device can maintain/store a single/unified model” to “Whether UE device can maintain/store a single/unified CSI generation model over different NW vendors”.

| Characteristics | Type 2: Simultaneous | Type 2: Sequential, NW first (note 1) |
| --- | --- | --- |
| Flexibility to support cell/site/scenario/configuration specific model | No consensus | No consensus |
| Model update flexibility after deployment (note 4) | Not flexible | No consensus |
| Feasibility of allowing UE side and NW side to develop/update models separately | Infeasible | No consensus |
| Extendibility: to train new NW-side model compatible with UE-side model in use | Not support | Not support |
| Whether training data distribution can match the inference device | No consensus | Yes for UE-part model; limited for NW-part model |

 

 

R1-2310315         Summary #4 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

R1-2310316         Summary #5 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Friday session

 

| Characteristics | Type 3: NW first | Type 3: UE first |
| --- | --- | --- |
| Flexibility to support cell/site/scenario/configuration specific model | [Semi] flexible except for UE defined scenarios (note x1); [semi] flexible for UE defined scenarios if UE assistance information is supported and available | [Semi] flexible except for NW defined scenarios (note x1); [semi] flexible for NW defined scenarios if NW assistance information is supported and available |
| Feasibility of allowing UE side and NW side to develop/update models separately | Feasible | Feasible |
| Extendibility: to train new UE-side model compatible with NW-side model in use | Support | Not support (note x2) |
| Extendibility: to train new NW-side model compatible with UE-side model in use | Not support (note x2) | Support |

Note x1: For this table, NW defined scenarios are scenarios with NW defined dataset categorization. UE defined scenarios are scenarios with UE defined dataset categorization. [Semi] means no consensus for including “semi”.

Note x2: Extendibility can be achieved by combining different training collaboration type 3.

 

| Characteristics | Type 1 NW side: unknown model structure at UE | Type 1 NW side: known model structure at UE | Type 1 UE side: unknown model structure at NW | Type 1 UE side: known model structure at NW |
| --- | --- | --- | --- | --- |
| Flexibility to support cell/site/scenario/configuration specific model | Flexible except for UE defined scenarios; not flexible for UE defined scenarios unless UE assistance information is supported and available | Flexible except for UE defined scenarios; not flexible for UE defined scenarios unless UE assistance information is supported and available | Flexible except for NW defined scenarios; not flexible for NW defined scenarios unless NW assistance information is supported and available | Flexible except for NW defined scenarios; not flexible for NW defined scenarios unless NW assistance information is supported and available |

 

| Characteristics | Type 1 NW side: unknown model structure at UE | Type 1 NW side: known model structure at UE | Type 1 UE side: unknown model structure at NW | Type 1 UE side: known model structure at NW |
| --- | --- | --- | --- | --- |
| Whether training data distribution can match the inference device | Limited | Limited | Yes | Yes |
| Software/hardware compatibility (whether device capability can be considered for model development) | No for UE | Yes | No for NW | Yes |

 

| Characteristics | Type 1 NW side: unknown model structure at UE | Type 1 NW side: known model structure at UE | Type 1 UE side: unknown model structure at NW | Type 1 UE side: known model structure at NW |
| --- | --- | --- | --- | --- |
| Whether gNB/device specific optimization is allowed | gNB: Yes; UE: No | gNB: Yes; UE: less flexible compared to UE side | gNB: No; UE: Yes | UE: Yes; gNB: less flexible compared to NW side |
| Model update flexibility after deployment | Flexible only if UE supports the new structure | Flexible for parameter update | Flexible; less flexible than Type 1 NW side | Flexible for parameter update; less flexible than Type 1 NW side |
| Whether gNB can maintain/store a single/unified CSI reconstruction model over different UE vendors (note x3) | Yes | Yes. Performance refers to observations in "1 NW part model to M>1 UE part models" of Section 6.2.2.4, TR 38.843 (note x5) | No | No |
| Whether UE device can maintain/store a single/unified CSI generation model over different NW vendors (note x4) | No | No | Yes | Yes. Performance refers to observations in "1 UE part model to N>1 NW part models" of Section 6.2.2.4, TR 38.843 (note x5) |

 

Note x3: Whether gNB/UE needs to maintain/store multiple CSI generation/reconstruction models respectively, is not discussed.

Note x4: For model inference, UE does not need to use multiple models from different NW vendors per cell.

Note x5: 1 to many joint trainings is assumed.

 

| Characteristics | Type 2: Simultaneous | Type 2: Sequential, NW first (note 1) | Type 3: NW first | Type 3: UE first |
| --- | --- | --- | --- | --- |
| Whether gNB can maintain/store a single/unified CSI reconstruction model over different UE vendors (note x3) | Yes. Performance refers to observations in "1 NW part model to M>1 UE part models" and "1 UE part model to N>1 NW part models" of Section 6.2.2.4, TR 38.843 | Yes. Performance refers to observations in "1 NW part model to M>1 UE part models" of Section 6.2.2.4, TR 38.843 | Yes. Performance refers to observations in "NW first training, 1 NW part model to 1 UE part model, same backbone" and "NW first training, 1 NW part model to 1 UE part model, different backbones" of Section 6.2.2.5, TR 38.843 | Yes. Performance refers to observations in "UE first training, M>1 UE part models to 1 NW part model" of Section 6.2.2.5, TR 38.843 |
| Whether UE device can maintain/store a single/unified CSI generation model over different NW vendors (note x4) | Yes. Performance refers to observations in "1 NW part model to M>1 UE part models" and "1 UE part model to N>1 NW part models" of Section 6.2.2.4, TR 38.843 | Yes. Performance refers to observations in "1 NW part model to M>1 UE part models" of Section 6.2.2.4, TR 38.843 | Performance refers to observations in "NW first training, 1 UE part model to N>1 NW part models" of Section 6.2.2.5, TR 38.843 | Yes. Performance refers to observations in "UE first training, 1 NW part model to 1 UE part model, same backbone" and "UE first training, 1 NW part model to 1 UE part model, different backbones" of Section 6.2.2.5, TR 38.843 |

 

 

R1-2310317         Final summary on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

8.14.3    Remaining aspects on AI/ML

To be used for finalization of TR conclusions and/or recommendations on ‘Evaluation on AI/ML for CSI feedback enhancement’, ‘Evaluation on AI/ML for beam management’, ‘Other aspects on AI/ML for beam management’, ‘Evaluation on AI/ML for positioning accuracy enhancement’, and ‘Other aspects on AI/ML for positioning accuracy enhancement’. Contributions are to be submitted only by FLs.

 

R1-2308838         Remaining Aspects of AI/ML for Positioning Accuracy Enhancement       Ericsson

R1-2308917         Highlights for the evaluation on AI/ML based CSI feedback enhancement       Huawei, HiSilicon

R1-2308950         Remaining open aspects of AI/ML positioning           vivo

R1-2309398         Remaining aspects for evaluation of AI/ML for beam management        Moderator (Samsung)

R1-2309619         Other aspects on AI/ML for beam management          OPPO

 

R1-2310364         Summary #1 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement Moderator (Ericsson)

R1-2310365         Summary #2 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement           Moderator (Ericsson)

From Tuesday session

Agreement

Capture the following observations in TR 38.843, which are updated from the corresponding observations in RAN1#114.

For both direct AI/ML positioning and AI/ML assisted positioning, evaluation results show that:

·       Fine-tuning/re-training a previous model with dataset of the new deployment scenario improves the model performance for the new deployment scenario. For details on the amount of improvement, see other observations.

·       After fine-tuning/re-training a previous model with dataset of the new deployment scenario, the performance of the updated model degrades for the previous deployment scenario (e.g., previous clutter parameter setting) that the previous model was trained for.

o   Examples of the deployment scenario include: different drops, different clutter parameter, different InF scenarios

For both direct AI/ML positioning and AI/ML assisted positioning,

·       if the new deployment scenario is significantly different from the previous deployment scenario the model was trained for (e.g., different drops, different clutter parameter, different InF scenarios), fine-tuning a previous model requires a similarly large training dataset size as training the model from scratch, in order to achieve similar performance for the new deployment scenario.

·       If the new deployment scenario is NOT significantly different from the previous deployment scenario the model was trained for (e.g., 2 ns difference in network synchronization error between the previous and the new deployment scenario), fine-tuning a previous model requires a small (e.g., x% = 10%) training dataset size as compared to training the model from scratch, in order to achieve similar performance for the new deployment scenario.

 

======================= Start of text proposal to TR 38.843 v1.0.0 ====================

6.4.2.3      Fine-tuning

Observations:

Direct AI/ML positioning

...

As a summary of the observations above, for direct AI/ML positioning, evaluation results show that:

·        Fine-tuning/re-training a previous model with dataset of the new deployment scenario improves the model performance for the new deployment scenario. For details on the amount of improvement, see the observations listed above.

·        After fine-tuning/re-training a previous model with dataset of the new deployment scenario, the performance of the updated model degrades for the previous deployment scenario (e.g., previous clutter parameter setting) that the previous model was trained for.

o    Examples of the deployment scenario include: different drops, different clutter parameter, different InF scenarios

For direct AI/ML positioning,

·        if the new deployment scenario is significantly different from the previous deployment scenario the model was trained for (e.g., different drops, different clutter parameter, different InF scenarios), fine-tuning a previous model requires a similarly large training dataset size as training the model from scratch, in order to achieve similar performance for the new deployment scenario.

·        If the new deployment scenario is NOT significantly different from the previous deployment scenario the model was trained for (e.g., 2ns difference in network synchronization error between the previous and the new deployment scenario), fine-tuning a previous model requires only a small (e.g., x%=10%) training dataset size as compared to training the model from scratch, in order to achieve similar performance for the new deployment scenario.

 

AI/ML assisted positioning

...

Both direct AI/ML positioning and AI/ML assisted positioning

As a summary of the observations above, for both direct AI/ML positioning and AI/ML assisted positioning, evaluation results show that:

·        Fine-tuning/re-training a previous model with dataset of the new deployment scenario improves the model performance for the new deployment scenario. For details on the amount of improvement, see the observations listed above.

·        After fine-tuning/re-training a previous model with dataset of the new deployment scenario, the performance of the updated model degrades for the previous deployment scenario (e.g., previous clutter parameter setting) that the previous model was trained for.

o    Examples of the deployment scenario include: different drops, different clutter parameter, different InF scenarios

For both direct AI/ML positioning and AI/ML assisted positioning,

·        if the new deployment scenario is significantly different from the previous deployment scenario the model was trained for (e.g., different drops, different clutter parameter, different InF scenarios), fine-tuning a previous model requires a similarly large training dataset size as training the model from scratch, in order to achieve similar performance for the new deployment scenario.

·        If the new deployment scenario is NOT significantly different from the previous deployment scenario the model was trained for (e.g., 2ns difference in network synchronization error between the previous and the new deployment scenario), fine-tuning a previous model requires only a small (e.g., x%=10%) training dataset size as compared to training the model from scratch, in order to achieve similar performance for the new deployment scenario.

 

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.0.0 ====================

 

Agreement

·       Adopt the text proposal below to describe the AI/ML methods used in evaluation in TR38.843.

======================= Start of text proposal to TR 38.843 v1.0.0 ====================

6.4             Positioning accuracy enhancements

6.4.1         Evaluation assumptions, methodology and KPIs

For AI/ML-based positioning evaluation, RAN1 does not attempt to define any common AI/ML model as a baseline.

 

For AI/ML based positioning, the following methods are evaluated.

(1)    Direct AI/ML positioning, see an example illustrated in Figure 6.4.1-1.

(2)    Assisted AI/ML positioning.

(a)    Assisted AI/ML positioning with multi-TRP construction, see an example illustrated in Figure 6.4.1-2.

(b)    Assisted positioning with single-TRP construction and one model for N TRPs, see an example illustrated in Figure 6.4.1-3.

(c)     Assisted positioning with single-TRP construction and N models for N TRPs, see an example illustrated in Figure 6.4.1-4.

 

Figure 6.4.1-1. Direct AI/ML positioning

 

Figure 6.4.1-2. Assisted positioning with multi-TRP construction

 

Figure 6.4.1-3. Assisted positioning with single-TRP construction, and one model for N TRPs.

 

Figure 6.4.1-4. Assisted positioning with single-TRP construction, and N models for N TRPs.

 

=======================  End of text proposal to TR 38.843 v1.0.0 ====================
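The methods enumerated in the text proposal above differ mainly in how models map measurements to outputs. A shape-level sketch follows, with placeholder callables standing in for trained models and invented dimensions; it is illustrative only and not part of the TR:

```python
# Shape-level sketch of the evaluated AI/ML positioning methods.
# Each per-TRP measurement is collapsed to one float for brevity; real
# inputs (e.g., CIR) are much larger. All names/dimensions are invented.

N_TRP = 18
meas = [float(i) for i in range(N_TRP)]   # one toy measurement per TRP

# (1) Direct AI/ML positioning: all TRP measurements in, UE (x, y) out.
def direct_model(m):
    return (sum(m) / len(m), max(m))      # placeholder position estimate

# (2a) Assisted, multi-TRP construction: one model sees all TRPs and
#      emits one intermediate value (e.g., timing) per TRP.
def multi_trp_model(m):
    return [v + 1.0 for v in m]

# (2b) Assisted, single-TRP construction, one model shared by N TRPs.
def shared_model(v):
    return v + 1.0

out_2b = [shared_model(v) for v in meas]

# (2c) Assisted, single-TRP construction, N models for N TRPs (here the
#      N placeholders happen to be identical).
per_trp_models = [shared_model for _ in range(N_TRP)]
out_2c = [per_trp_models[k](meas[k]) for k in range(N_TRP)]

print(len(direct_model(meas)), len(out_2b), len(out_2c))
```

In (2a)-(2c) the intermediate per-TRP outputs are then fed to a conventional (non-AI/ML) positioning computation; that step is outside this sketch.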

 

Agreement

·       Adopt the text proposal to clarify that the AI/ML positioning methods can be used on the network side or the UE side. Evaluation results have been submitted for both by companies.

======================= Start of text proposal to TR 38.843 v1.0.0 ====================

<Unchanged text is omitted>

6.4.1         Evaluation assumptions, methodology and KPIs

For AI/ML-based positioning evaluation, RAN1 does not attempt to define any common AI/ML model as a baseline.

In the evaluation, some results use UE measurement information as model input, other results use gNB measurement information as model input, and they are not distinguished for summarizing the results.

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.0.0 ====================

 

 

R1-2310366         Summary #3 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement           Moderator (Ericsson)

Presented in Wednesday session.

 

R1-2310487         Summary #4 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement Moderator (Ericsson)

R1-2310488         Summary #5 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement           Moderator (Ericsson)

From Friday session

Agreement

·       Adopt the text proposal below for high level summary of evaluations of AI/ML based positioning in the study item.

======================= Start of text proposal to TR 38.843 v1.0.0 ====================

6.4.2.6      Summary of Performance Results for Positioning accuracy enhancements

 

For the use case of positioning accuracy enhancement, extensive evaluations have been carried out. Both direct AI/ML positioning and AI/ML assisted positioning are evaluated using one-sided models. The following areas are investigated.

  • Performance evaluation without generalization consideration, where the AI/ML model is trained and tested with dataset of the same deployment scenario.

o    AI/ML vs RAT-dependent positioning methods. For the basic performance without generalization consideration, AI/ML based positioning can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods. For example, in InF-DH with clutter parameter setting {60%, 6m, 2m}, AI/ML based positioning can achieve horizontal positioning accuracy of <1m at CDF=90%, as compared to >15m for the conventional positioning method.

o    Impact of training data sample density (i.e., training dataset size for a given evaluation area). Evaluation with uniform UE distribution shows that the larger the training dataset size (i.e., the higher the sample density), the smaller the positioning error (in meters), until a saturation point is reached where additional training data does not bring further improvement to the positioning accuracy.

  • AI/ML complexity. For a given company’s model design, in terms of model inference complexity (model complexity and computational complexity), a lower complexity model can still achieve acceptable positioning accuracy (e.g., <1m), albeit degraded, when compared to a higher complexity model.
  • Model input size reduction. Evaluations are carried out to examine various ways to change the model input size and its impact on positioning accuracy:

o    Different measurement type, for example, CIR, PDP, DP.

o    Different number of consecutive time domain samples, Nt.

o    Different number of non-zero samples N't selected from the Nt consecutive time domain samples (N't < Nt).

o    Different number of active TRPs, N'TRP.

The model input size for various measurement types (CIR, PDP, DP) and dimensions (N'TRP, Nt, N't, Nport) is analyzed. Evaluation results show that model inputs of different measurement types and dimensions can have different reporting overhead and positioning accuracy.

  • Fixed TRP pattern vs dynamic TRP pattern. Evaluation results show that approaches supporting a dynamic TRP pattern may be able to achieve horizontal positioning accuracy comparable to approaches supporting a fixed TRP pattern, when other design parameters are held the same.

  • Model output of AI/ML assisted positioning. For AI/ML assisted positioning, evaluations are carried out where the model output includes timing information and/or LOS/NLOS indicator, in the format of hard- or soft- value.
  • Non-ideal label in the training dataset. Evaluations are carried out to show the impact of:

o    Label error, where the label in the training dataset is degraded from ground truth label by an error.

       For direct AI/ML positioning and AI/ML assisted positioning with timing information as model output, location error in each dimension of x-axis and y-axis is modelled as a truncated Gaussian distribution.

       For AI/ML assisted positioning where the model output includes the LOS/NLOS indicator, random LOS/NLOS label error is applied.

o    Absent label, where some data samples in the training dataset do not have associated labels. Semi-supervised learning is evaluated for this case.

  • Model monitoring. Preliminary evaluations of model monitoring methods are provided by individual companies. The following methods are shown to be feasible:

o    Label based methods, where ground truth label (or its approximation) is provided for monitoring the accuracy of model output.

o    Label-free methods, where model monitoring does not require ground truth label (or its approximation).

 

Based on RAN1 evaluations of AI/ML based positioning,

·        It is beneficial to support both direct AI/ML and AI/ML assisted positioning approaches since they can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods in the evaluated indoor factory scenarios.

·        Both UE-side model and NW-side model can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods.

=======================  End of text proposal to TR 38.843 v1.0.0 ====================
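The "<1m at CDF=90%" figures quoted in the summary above are percentiles of the horizontal positioning error distribution. A minimal sketch of how such a number is read off a set of per-UE errors (the error values below are synthetic, for illustration only):

```python
import math

def horizontal_error(est, true):
    """Horizontal (2D) positioning error in metres."""
    return math.hypot(est[0] - true[0], est[1] - true[1])

def error_at_cdf(errors, q=0.90):
    """Smallest error e such that a fraction q of samples are <= e."""
    s = sorted(errors)
    idx = max(0, math.ceil(q * len(s)) - 1)
    return s[idx]

# Synthetic per-UE horizontal errors in metres (NOT evaluation data).
errors = [0.1 * i for i in range(1, 101)]   # 0.1 m .. 10.0 m
print(error_at_cdf(errors))                  # error at CDF = 90%
```

A statement like "accuracy of <1m at CDF=90%" then simply means `error_at_cdf(errors) < 1.0` for that evaluation's error samples.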

 

 

=====================================================================================

R1-2310416         Summary#1 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Tuesday session

Working Assumption

For AI-based beam management, from RAN1 perspective, at least the following are recommended for normative work

 

 

Final summary in R1-2310417.

 

=====================================================================================

R1-2310442         FL summary #1 for remaining aspects for evaluation of AI/ML for beam management      Moderator (Samsung)

From Tuesday session

Agreement

Adopt the update of the text proposal for TR 38.843:

==== Start of text proposal for TR 38.843 =======

System performance related KPIs, including:

< Unchanged parts are omitted >

-      RS overhead reduction for BM-Case2, when Top-1 and Top-K beam (pairs) are inferred:

< Unchanged parts are omitted >

To calculate the measurement/RS overhead reduction and summarize results for BM-Case 2, at least when Top-1 beam (pair) is inferred:

< Unchanged parts are omitted >

====== end of text proposal for TR 38.843 ======

 

Agreement

Adopt the update of the text proposal for TR 38.843:

==== Start of text proposal for TR 38.843 =======

5.2             Beam management

Finalization of representative sub-use cases:

The following are selected as representative sub-use cases:

-      BM-Case1: Spatial-domain Downlink beam prediction for Set A of beams based on measurement results of Set B of beams

-      Consider: Alt. 1): AI/ML model training and inference at NW side. Alt. 2): AI/ML model training and inference at UE side.

-      Consider: Alt. i): Set A and Set B are different (Set B is NOT a subset of Set A). Alt. ii): Set B is a subset of Set A. Note: Set A is for DL beam prediction and Set B is for DL beam measurement. The codebook construction of Set A and Set B can be clarified by companies.

-      AI/ML model input consider: Alt 1): Only L1-RSRP measurement based on Set B; Alt.2): L1-RSRP measurement based on Set B and assistance information; Alt. 3): CIR based on Set B; Alt. 4): L1-RSRP measurement based on Set B and the corresponding DL Tx and/or Rx beam ID.

-      BM-Case2: Temporal Downlink beam prediction for Set A of beams based on the historic measurement results of Set B of beams

-      Consider: Alt. 1): AI/ML model training and inference at NW side. Alt. 2): AI/ML model training and inference at UE side.

-      Consider: Alt. i): Set A and Set B are different (Set B is NOT a subset of Set A). Alt. ii): Set B is a subset of Set A (Set A and Set B are not the same). Alt. iii): Set A and Set B are the same.

-      AI/ML model input consider: measurement results of K (K≥1) latest measurement instances with the following alternatives: Alt. 1): Only L1-RSRP measurement based on Set B; Alt 2): L1-RSRP measurement based on Set B and assistance information; Alt. 3): L1-RSRP measurement based on Set B and the corresponding DL Tx and/or Rx beam ID.

-      [AI/ML model output]: F predictions for F future time instances can be obtained based on the output of AI/ML model, where each prediction is for each time instance. At least F=1.

Set B is a set of beams whose measurements are taken as inputs of the AI/ML model.

Note: Beams in Set A and Set B can be in the same Frequency Range.

< Unchanged parts are omitted >

The following alternatives for the AI/ML model output are considered:

-      Alt.1: Tx and/or Rx Beam ID(s) and/or the predicted L1-RSRP of the N predicted DL Tx and/or Rx beams

-      e.g., N predicted beams can be the top-N predicted beams

-      Alt.2: Tx and/or Rx Beam ID(s) of the N predicted DL Tx and/or Rx beams and other information

-      e.g., N predicted beams can be the top-N predicted beams

-      Alt.3: Tx and/or Rx Beam angle(s) and/or the predicted L1-RSRP of the N predicted DL Tx and/or Rx beams

-      e.g., N predicted beams can be the top-N predicted beams

Notes: It is up to companies to provide other alternative(s). Beam ID is only used for discussion purposes. All the outputs are “nominal” and only for discussion purposes. The value of N is up to each company. All of the outputs in the above alternatives may vary based on whether the AI/ML model inference is at the UE side or the gNB side. The Top-N beam IDs might have been derived via post-processing of the ML-model output.

< Unchanged parts are omitted >

====== end of text proposal for TR 38.843 ======
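The Top-1(%) and Top-K/1(%) prediction-accuracy metrics used throughout the beam-management evaluations can be computed as follows. This is a generic sketch; the L1-RSRP values are invented and the metric definitions are paraphrased from the evaluation agenda, not quoted from the TR:

```python
def top_k_ids(pred_rsrp, k):
    """IDs of the K beams with the highest predicted L1-RSRP."""
    order = sorted(range(len(pred_rsrp)), key=lambda i: pred_rsrp[i], reverse=True)
    return order[:k]

def genie_top1(true_rsrp):
    """ID of the genie-aided best beam (highest true L1-RSRP)."""
    return max(range(len(true_rsrp)), key=true_rsrp.__getitem__)

def top1_accuracy(preds, truths):
    """Fraction of samples where the Top-1 predicted beam is the genie Top-1."""
    hits = sum(top_k_ids(p, 1)[0] == genie_top1(t) for p, t in zip(preds, truths))
    return hits / len(preds)

def topk1_accuracy(preds, truths, k):
    """Fraction of samples where the genie Top-1 beam is among the predicted Top-K."""
    hits = sum(genie_top1(t) in top_k_ids(p, k) for p, t in zip(preds, truths))
    return hits / len(preds)

# Two toy samples over a Set A of 4 beams (invented L1-RSRP values, dBm).
preds  = [[-80, -70, -90, -85], [-75, -72, -71, -90]]
truths = [[-81, -69, -91, -84], [-74, -70, -73, -95]]
print(top1_accuracy(preds, truths), topk1_accuracy(preds, truths, k=2))
```

In the second sample the model ranks the genie-best beam second, so it misses under Top-1 but counts as a hit under Top-2/1, which is why Top-K/1(%) is never below Top-1(%).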

 

Agreement

Adopt the update of the text proposal for TR 38.843:

==== Start of text proposal for TR 38.843 =======

6.3.2         Performance results

BM_Table 1 through BM_Table 5 in attached Spreadsheets for Beam Management evaluations present the performance results for:

-        BM_Table 1: Evaluation results for BMCase-1 without generalization

-        BM_Table 2: Evaluation results for BMCase-2 without generalization

-        BM_Table 3: Evaluation results for BMCase-1 with generalization for DL Tx beam prediction

-        BM_Table 4: Evaluation results for BMCase-1 with generalization for beam pair prediction

-        BM_Table 5: Evaluation results for BMCase-2 with generalization for DL Tx beam and beam pair prediction

In the following performance results, Top-K/1(%) is used for Top-K DL Tx beam prediction accuracy or Top-K beam pair prediction accuracy.

< Unchanged parts are omitted >

====== end of text proposal for TR 38.843 ======

 

 

R1-2310443         FL summary #2 for remaining aspects for evaluation of AI/ML for beam management      Moderator (Samsung)

From Wednesday session

Conclusion

For all five positioning cases (Case 1/2a/2b/3a/3b), RAN1 has not considered prioritization.

 

Observation

For BM-Case1, when Set B is a subset of Set A or Set B is different from Set A, without UE rotation, AI/ML can achieve good performance with measurements of a fixed Set B that is 1/4 or 1/8 of Set A of beams measured with the best Rx beam for DL Tx beam prediction, and with measurements of a fixed Set B that is 1/4, 1/8 or 1/16 of Set A for beam pair prediction. In addition, based on the evaluation results from 2 or 3 sources, for BM-Case1 DL Tx beam prediction with 1/4 or 1/8 measurement/RS overhead, 96%~99% or 85%~98% of the UE average throughput and 95%~97% or 70%~84% of the UE 5%ile throughput of non-AI baseline option 1 (exhaustive search over Set A beams) can be achieved with the beam predicted by AI/ML. Note that ideal measurements are assumed in the evaluations (in section 6.3.2.1): beams can be measured regardless of their SNR, with no measurement error, measurements obtained in a single time instance (within a channel-coherence time interval), no quantization, and no constraint on UCI payload (for NW-side models).

With some realistic consideration (in section 6.3.2.3): 

In addition, compared with a fixed Set B (Opt 1), when Set B is changed among pre-configured patterns (Opt 2B), some performance degradation (e.g., no more than or about 10% Top-1 beam prediction accuracy loss based on most of the results) is observed; when Set B is randomly changed within Set A of beams (Opt 2C), large degradation (e.g., 20%~50% Top-1 beam prediction accuracy loss based on most of the results) is observed. With a reduced number of measurements of a fixed set of beams (Set C) as inputs of AI/ML (Opt 2D), some performance degradation (e.g., <10% Top-1 beam prediction accuracy loss based on most of the results) is observed compared with using all measurements from Set C; meanwhile, the UCI reporting overhead for inference inputs can be reduced (e.g., 1/2 to 7/8 UCI reporting overhead reduction) compared with reporting all measurements of the fixed beam Set C.

Moreover, the performance with different label options has been evaluated which may lead to different data collection overhead for training (for both BM-Case1 and BM-Case2).

 

Observation

Evaluation results for BM-Case2 when Set B = Set A, for DL Tx beam prediction with the measurements from the best Rx beam and for beam pair prediction, are summarized in Table AA and Table BB, without considering generalization aspects.

Table AA: Summary of the evaluation results for BM-Case2 when Set B=Set A for DL Tx beam prediction

 

-      Beam prediction accuracy performance compared with non-AI baseline (option 2):

-      Without rotation: for 80 ms or 160 ms prediction time, some evaluation results show AI/ML may have similar performance or some degradation; for 160 ms or larger prediction time, most evaluation results show AI/ML provides some beam prediction accuracy gain, and the longer the prediction time, the higher the beam prediction accuracy gain achieved by AI/ML.

-      With rotation: AI/ML can provide some beam prediction accuracy gain; the longer the prediction time, the higher the beam prediction accuracy gain achieved by AI/ML. (2 sources)

-      RS overhead Case A, compared with non-AI baseline (option 1):

-      Without rotation: AI/ML can achieve decent beam prediction accuracy with 1/5~1/2 measurement/RS overhead reduction.

-      With rotation: NA.

-      RS overhead Case B, compared with non-AI baseline (option 2) with given prediction accuracy:

-      Without rotation: AI/ML can achieve a certain beam prediction accuracy with 7/10 measurement/RS overhead reduction. (1 source)

-      With rotation: AI/ML can achieve a certain beam prediction accuracy with 1/2 measurement/RS overhead reduction. (1 source)

-      RS overhead Case B+, compared with non-AI baseline (option 1):

-      Without rotation: AI/ML can achieve good beam prediction with 80% measurement/RS overhead reduction. (1 source)

-      With rotation: AI/ML can achieve good beam prediction with more than 80% measurement/RS overhead reduction. (1 source)

 

Table BB: Summary of the evaluation results for BM-Case2 when Set B=Set A for beam pair prediction

 

-      Beam prediction accuracy performance compared with non-AI baseline (option 2):

-      Without rotation: for 160 ms or less prediction time, AI/ML may or may not provide beam prediction accuracy gain; the longer the prediction time, the higher the beam prediction accuracy gain achieved by AI/ML.

-      With rotation: AI/ML may or may not provide beam prediction accuracy gain compared with the non-AI baseline. (3 sources)

-      RS overhead Case A, compared with non-AI baseline (option 1):

-      Without rotation: AI/ML can provide good beam prediction accuracy with less measurement/RS overhead (up to 1/2 reduction).

-      With rotation: NA.

-      RS overhead Case B, compared with non-AI baseline (option 2) with given prediction accuracy:

-      Without rotation: AI/ML can achieve a certain beam prediction accuracy with 1/2 or 3/5 measurement/RS overhead reduction. (2 sources)

-      With rotation: NA.

-      RS overhead Case B+, compared with non-AI baseline (option 1):

-      Without rotation: AI/ML can achieve good beam prediction accuracy with 80% measurement/RS overhead reduction. (1 source)

-      With rotation: NA.

 

For BM-Case2 when Set B is a subset of Set A for DL Tx beam prediction with the measurements from the best Rx beam, without considering generalization aspects, AI/ML can achieve good prediction accuracy with 1/2, 1/3, 1/4 or 1/8 RS overhead in the spatial domain, for the cases where Set B is fixed or varies among pre-configured patterns of beams, with or without UE rotation. Further RS/measurement overhead reduction can be achieved by also considering overhead reduction in the time domain.

For BM-Case2 when Set B is a subset of Set A for beam pair prediction, without considering generalization aspects

Note that ideal measurements are assumed in the above evaluations (for BM-Case2): beams can be measured regardless of their SNR, with no measurement error, no quantization, and no constraint on UCI payload (for NW-side models). With measurement error, quantization, or measurement results from a quasi-optimal Rx beam for DL Tx beam prediction, similar observations are made (for some cases) or expected as for BM-Case1.

 

 

R1-2310656         FL summary #3 for remaining aspects for evaluation of AI/ML for beam management      Moderator (Samsung)

From Friday session

Agreement

·       Adopt the update of the text proposal for TR 38.843:

==== Start of text proposal for TR 38.843 =======

6.3.2         Performance results

< Unchanged parts are omitted >

Figure 6.3.2-1 and Table 6.3.2-1 illustrate model parameters (M) and computational complexity in FLOPs (M) for BM-Case 1 and BM-Case 2, Tx beam prediction and beam pair prediction respectively, according to the reported assumptions in BM_Table 1 and BM_Table 2.

Note: Optimization of AI/ML model (e.g., in terms of model/computational complexity) was not discussed in the study.

Figure 6.3.2-1

 

Table 6.3.2-1 AI/ML model complexity/computation complexity used in the evaluations for AI/ML in beam management

 

BM-Case 1 DL Tx beam:

-      Model complexity (number of parameters): more than 1K to 4.9M; majority reported less than 1M or about 1M

-      Model complexity (model size): 50 Kbytes to 20 Mbytes; majority reported less than 0.1 Mbytes ~ 0.6 Mbytes

-      Computational complexity (FLOPs): ~2.7K to 222M; majority reported less than 1M or 10s of M

BM-Case 1 DL beam pair:

-      Model complexity (number of parameters): 72K to 4.9M; majority reported less than 0.1s of M ~ 1M

-      Model complexity (model size): 0.17 Mbytes to 21 Mbytes; majority reported less than 1 Mbytes ~ 4 Mbytes

-      Computational complexity (FLOPs): 15K to 224M; majority reported less than 1M ~ 4M

BM-Case 2 DL Tx beam:

-      Model complexity (number of parameters): 35K to 11M; majority reported less than 0.1s of M ~ 1M

-      Model complexity (model size): 0.5 Mbytes to 15 Mbytes; majority reported about 1s of Mbytes

-      Computational complexity (FLOPs): ~90K to 54M; majority reported less than 0.1s of M or 1s of M

BM-Case 2 DL beam pair:

-      Model complexity (number of parameters): 20K to 13M; majority reported about 0.1M ~ 1M

-      Model complexity (model size): 0.08 Mbytes to 15 Mbytes; majority reported about 1 Mbytes

-      Computational complexity (FLOPs): ~90K to 443M; majority reported less than 0.4M or 1s of M

< Unchanged parts are omitted >

====== end of text proposal for TR 38.843 ======
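As a rough guide to how parameter and FLOP counts like those in Table 6.3.2-1 arise, the counts for a plain fully-connected network can be computed as below. This is a generic sketch with an invented architecture; the TR does not specify any company's model:

```python
def mlp_complexity(layer_sizes):
    """Parameter count and inference FLOPs of a fully-connected network.

    One dense layer (n_in -> n_out) has n_in*n_out weights plus n_out
    biases, and costs roughly 2*n_in*n_out FLOPs per inference (one
    multiply and one add per weight).
    """
    params = flops = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        params += n_in * n_out + n_out
        flops += 2 * n_in * n_out
    return params, flops

# Hypothetical BM-Case1 predictor: 32 measured beams (Set B) in,
# 256 beams (Set A) scored at the output, two hidden layers of 512.
params, flops = mlp_complexity([32, 512, 512, 256])
print(params, flops)
```

For this invented architecture both counts land below 1M, i.e. in the "majority reported less than 1M" bucket of the table; convolutional or recurrent models require different counting.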

 

Observation

For both BM-Case1 and BM-Case2, when Set B is a subset of or different from Set A, a certain RS/measurement overhead is assumed when summarizing the evaluation results for Top-1(%) beam prediction accuracy. With additional measurements among the predicted Top-K beams (pairs) (i.e., with additional RS/measurement overhead), the Top-1 beam (pair) can be obtained by finding the best beam (pair) among the K predicted beams (pairs), with beam prediction accuracy of Top-K/1(%), provided the genie-aided Top-1 beam does not change out of the K predicted beams (pairs) during the additional measurements.

Note: This is to explain the potential implications and relations of the Top-1(%) and Top-K/1(%) beam prediction accuracy metrics defined in the evaluation agenda item with regard to RS overhead and additional measurements. The corresponding specification impact is a separate discussion.
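The overhead implication described in the observation above reduces to simple counting: the AI/ML approach measures Set B plus K re-measured predicted beams, while the baseline sweeps all of Set A. A sketch with invented set sizes (the observation itself fixes no values):

```python
def rs_overhead_reduction(set_a_size, set_b_size, k):
    """Fractional RS/measurement overhead reduction of AI/ML beam
    prediction (measure Set B, then re-measure the Top-K predicted
    beams) versus an exhaustive baseline sweep over Set A."""
    ai_measurements = set_b_size + k
    return 1 - ai_measurements / set_a_size

# E.g., Set A of 32 beams, fixed Set B of 8 beams (1/4 of Set A), and
# K = 4 additional measurements to pick the Top-1 among the Top-K.
print(rs_overhead_reduction(set_a_size=32, set_b_size=8, k=4))
```

Setting k = 0 recovers the Top-1-only overhead figure; increasing K trades overhead for the higher Top-K/1(%) accuracy.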

 

Observation

Reduced measurement overhead can reduce measurement latency for beam prediction in some configurations.

 

 

Final summary in R1-2310657.

 

=====================================================================================

R1-2310449         Summary#1 for CSI evaluation of [114bis-R18-AI/ML]              Moderator (Huawei)

From Tuesday session

Agreement

·       Adopt the following TP to TR 38.843 to describe the procedure of inference for CSI compression

------------------ Text Proposal for 38.843 v1.0.0 Clause 6.2.1 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

CSI compression sub use case specific aspects:

The following figure provides an example of the inference procedure for CSI compression. Generating the input of the CSI generation model may require some pre-processing of the measured channel; some post-processing may also be applied to the output of the CSI reconstruction model. Besides the CSI feedback of the quantization output, other CSI/PMI related information may also be transmitted. There are also examples that merge quantization/dequantization into the inference of the CSI generation model/CSI reconstruction model, respectively.

Figure X An example of the CSI compression inference procedure.

For the evaluation of the AI/ML based CSI compression sub use case, companies are encouraged to report details of their models, including:

*** Unchanged text is omitted ***
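The inference chain captured in the text proposal above (pre-processing, CSI generation model, quantization, feedback, dequantization, CSI reconstruction model, post-processing) can be sketched end to end. The "models" below are placeholder callables and the uniform scalar quantizer is one possible choice, not something mandated by the TR:

```python
# Two-sided CSI compression inference sketch (placeholder models).
# UE side: pre-processed CSI -> CSI generation model -> quantize -> feedback
# NW side: dequantize -> CSI reconstruction model -> post-processing

def quantize(latent, bits=2):
    """Uniform scalar quantizer on [-1, 1] -> integer codewords."""
    levels = 2 ** bits
    step = 2.0 / levels
    return [min(levels - 1, int((v + 1.0) / step)) for v in latent]

def dequantize(codes, bits=2):
    levels = 2 ** bits
    step = 2.0 / levels
    return [-1.0 + (c + 0.5) * step for c in codes]

def csi_generation_model(precoder):
    """Placeholder 'encoder': clamp and keep half the coefficients."""
    return [max(-1.0, min(1.0, v)) for v in precoder[: len(precoder) // 2]]

def csi_reconstruction_model(latent):
    """Placeholder 'decoder': naive upsampling back to the input size."""
    out = []
    for v in latent:
        out += [v, v]
    return out

measured = [0.9, 0.8, -0.3, -0.2, 0.1, 0.0, -0.7, -0.6]  # toy pre-processed CSI
feedback_codes = quantize(csi_generation_model(measured))
reconstructed = csi_reconstruction_model(dequantize(feedback_codes))
print(feedback_codes, reconstructed)
```

The feedback payload here is 4 codewords of 2 bits each versus 8 real coefficients at the input, which is the compression being traded against reconstruction accuracy.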

 

Agreement

·       Adopt the following TP to TR 38.843 to describe the procedure of inference for CSI prediction.

------------------ Text Proposal for 38.843 v1.0.0 Clause 6.2.1 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

CSI prediction sub use case specific aspects:

The following figure provides an example of the inference procedure for CSI prediction. Generating the input of the CSI prediction model may require some pre-processing of the measured channel; some post-processing may also be applied to the output of the CSI prediction model.

Figure X An example of the CSI prediction inference procedure.

For the evaluation of the AI/ML based CSI prediction sub use case, companies are encouraged to report details of their models, including:

*** Unchanged text is omitted ***
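Analogously, the CSI prediction inference procedure above can be sketched with a placeholder predictor operating on K past (pre-processed) CSI instances. The linear extrapolator stands in for the AI/ML model and is purely illustrative:

```python
def csi_prediction_model(history):
    """Placeholder predictor: linear extrapolation from the last two
    observed CSI instances (each instance is a flat list of reals)."""
    prev, last = history[-2], history[-1]
    return [2 * b - a for a, b in zip(prev, last)]

# K = 4 past instances of a toy 3-element CSI; the toy evolution is
# linear in time, so this placeholder predicts the next instance exactly.
history = [[0.0, 1.0, 2.0],
           [0.1, 1.1, 2.1],
           [0.2, 1.2, 2.2],
           [0.3, 1.3, 2.3]]
predicted = csi_prediction_model(history)
print(predicted)
```

A real AI/ML predictor would replace the extrapolator with a trained model consuming all K instances, but the input/output shapes of the procedure are as sketched.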

 

 

R1-2310450         Summary#2 for CSI evaluation of [114bis-R18-AI/ML]              Moderator (Huawei)

From Wednesday session

Agreement

·       Adopt the following TP to TR 38.843 to capture the complexity results for CSI compression and CSI prediction.

------------------ Text Proposal for 38.843 v1.0.0 Clause 6.1------------------

6.1             Common evaluation methodology and KPIs

*** Unchanged text is omitted ***

Common KPIs (if applicable):

-      Performance

-          Intermediate KPIs

-          Link and system level performance

-          Generalization performance

-      Over-the-air Overhead

-          Overhead of assistance information

-          Overhead of data collection

-          Overhead of model delivery/transfer

-          Overhead of other AI/ML-related signalling

-      Inference complexity, including complexity for pre- and post-processing

-          Computational complexity of model inference: TOPs, FLOPs, MACs

- Note: there may be a disconnect between the actual complexity and the complexity evaluated using these KPIs as captured in Section 6, due to platform dependency and implementation (hardware and software) optimization solutions

-          Computational complexity for pre- and post-processing

*** Unchanged text is omitted ***

 

------------------ Text Proposal for 38.843 v1.1.0 Clause 6.2.2 ------------------

6.2.2         Performance results

*** Unchanged text is omitted ***

Observations:

CSI compression

For the evaluation of CSI compression, for the type of AI/ML model input (for CSI generation part)/output (for CSI reconstruction part), a vast majority of companies adopt precoding matrix as model input/output.

Note: For the evaluations of CSI compression with 1-on-1 joint training, 22 sources take precoding matrix without angular-delay domain conversion as the model input/output; 2 sources take precoding matrix with angular-delay domain representation as the model input/output. No company submitted explicit channel matrix as input.

The complexity metrics, in terms of FLOPs and number of parameters, of the AI/ML models adopted in the evaluations of CSI compression with Max rank 1 are summarized in the following figure, where the complexity of the CSI generation part and the complexity of the CSI reconstruction part are illustrated separately.

-  A majority of 25 sources adopt a CSI generation model with FLOPs from 10M to 800M, and 26 sources adopt a CSI reconstruction model with FLOPs from 10M to 1100M.

-  A majority of 21 sources adopt a CSI generation model with a number of parameters from 1M to 13M, and 22 sources adopt a CSI reconstruction model with a number of parameters from 1M to 17M.

-  Results refer to Table 1 of Section 7.3, R1-2310450.


Figure X Complexity of AI/ML models from evaluation results in terms of FLOPs and number of parameters for CSI compression.

For the evaluation of the AI/ML based CSI compression sub use case, companies are encouraged to report details of their models, including:

*** Unchanged text is omitted ***

CSI Prediction

The complexity values in terms of FLOPs and number of parameters of AI/ML models adopted in the evaluations of CSI prediction are summarized in the following figure.

-  Results refer to Table 2 of Section 7.3, R1-2310450.


Figure X Complexity of AI/ML models from evaluation results in terms of FLOPs and number of parameters for CSI prediction.

For the AI/ML based CSI prediction, compared with the benchmark of the nearest historical CSI:

*** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related with changes to the EVM table to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

For calibration purposes on the dataset and/or AI/ML model across companies, companies were encouraged to align the parameters (e.g., for scenarios/channels) for generating the dataset in the simulation as a starting point.

For the evaluation of the AI/ML based CSI feedback enhancement, for ‘Channel estimation’, ideal DL channel estimation is optionally taken into the baseline of evaluation methodology for the purpose of calibration and/or comparing intermediate results (e.g., accuracy of AI/ML output CSI, etc.). Up to companies to report whether/how ideal channel is used in the dataset construction and performance evaluation/inference.

Note: Eventual performance comparison with the benchmark release and drawing SI conclusions should be based on realistic DL channel estimation.

*** Unchanged text is omitted ***

Table 6.2.1-1: Baseline System Level Simulation assumptions for AI/ML based CSI feedback enhancement evaluations

Parameter

Value

*** Unchanged text is omitted ***

CSI feedback

Feedback assumption at least for baseline scheme

- CSI feedback periodicity (full CSI feedback): 5 ms (baseline)

- Scheduling delay (from CSI feedback to time to apply in scheduling): 4 ms

Overhead

Companies shall provide the downlink overhead assumption (i.e., whether the CSI-RS transmission is UE-specific or not) and take that into account for overhead computation

Traffic model

At least, FTP model 1 with packet size 0.5 Mbytes is assumed.

Other options are not precluded

Traffic load (Resource utilization)

20/50/70%. Companies are encouraged to report the MU-MIMO utilization.

UE distribution

CSI compression: 80% indoor (3 km/h), 20% outdoor (30 km/h)

CSI prediction: 100% outdoor (10, 20, 30, 60, 120 km/h) including outdoor-to-indoor car penetration loss per TR 38.901 if the simulation assumes UEs inside vehicles. No explicit trajectory modeling considered for evaluations.

UE receiver

MMSE-IRC as the baseline receiver

Feedback assumption

Realistic

Channel estimation         

Realistic as a baseline. Up to companies to choose the error modelling method for realistic channel estimation.

Ideal DL channel estimation is optionally taken into the baseline of evaluation methodology for the purpose of calibration and/or comparing intermediate results (e.g., accuracy of AI/ML output CSI, etc.). Up to companies to report whether/how ideal channel is used in the dataset construction and performance evaluation/inference.

Note: Eventual performance comparison with the benchmark release and drawing SI conclusions should be based on realistic DL channel estimation.


*** Unchanged text is omitted ***

Baseline for performance evaluation

For CSI compression:

Companies need to report which option is used between:

- Rel-16 TypeII Codebook as the baseline for performance and overhead evaluation.

- Rel-17 TypeII Codebook as the baseline for performance and overhead evaluation.

 

Additional assumptions from R17 TypeII EVM: the same consideration with respect to utilizing angle-delay reciprocity should be taken for the AI/ML based CSI feedback and the baseline scheme if R17 TypeII codebook is selected as baseline.

 

Optionally, Type I Codebook (if it outperforms Type II Codebook) can be considered for comparing AI/ML schemes.

 

For CSI-prediction:

Both of the following are taken as baselines:

-        The nearest historical CSI without prediction

-        Non-AI/ML or AI/ML with collaboration Level x based CSI prediction for which corresponding details would need to be reported

Note: the specific non-AI/ML based CSI prediction is compatible with R18 MIMO; collaboration level x AI/ML based CSI prediction could be implementation based AI/ML compatible with R18 MIMO as an example.

 

For the evaluation of CSI enhancements, companies can optionally provide the additional throughput baseline based on CSI without compression (e.g., eigenvector from measured channel), which is taken as an upper bound for performance comparison.

 

Note:            the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions. The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.

Table 6.2.1-2 presents the baseline link level simulation assumptions for AI/ML based CSI feedback enhancement evaluations.

Table 6.2.1-2: Baseline Link Level Simulation assumptions for AI/ML based CSI feedback enhancement evaluations

Parameter

Value

Duplex, Waveform

FDD (TDD is not precluded), OFDM

Carrier frequency

2GHz as baseline, optional for 4GHz

Bandwidth

10MHz or 20MHz

Subcarrier spacing

15kHz for 2GHz, 30kHz for 4GHz

Nt

32: (8,8,2,1,1,2,8), (dH,dV) = (0.5, 0.8)λ

Nr

4: (1,2,2,1,1,1,2), (dH,dV) = (0.5, 0.5)λ

Channel model

CDL-C as baseline, CDL-A as optional

UE speed

3 km/h, 10 km/h, 20 km/h or 30 km/h to be reported by companies

Delay spread

30ns or 300ns

Channel estimation

Realistic channel estimation algorithms (e.g., LS or MMSE) as a baseline, FFS ideal channel estimation

Ideal DL channel estimation is optionally taken into the baseline of evaluation methodology for the purpose of calibration and/or comparing intermediate results (e.g., accuracy of AI/ML output CSI, etc.). Up to companies to report whether/how ideal channel is used in the dataset construction and performance evaluation/inference.

Note: Eventual performance comparison with the benchmark release and drawing SI conclusions should be based on realistic DL channel estimation.

Rank per UE

Rank 1-4. Companies are encouraged to report the Rank number, and whether/how rank adaptation is applied

Note:            the baseline EVM is used to compare the performance with the benchmark release, while the AI/ML related parameters (e.g., dataset construction, generalization verification, and AI/ML related metrics) can be of additional/different assumptions. The conclusions for the use cases in the SI should be drawn based on generalization verification over potentially multiple scenarios/configurations.

*** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related with changes to the KPI part to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

KPIs and Evaluation metrics:

-      Capability/complexity: Floating point operations (FLOPs); AI/ML memory storage in terms of AI/ML model size and number of AI/ML parameters, reported by companies who may select either or both

-      Reported separately for the CSI generation part and the CSI reconstruction part (for CSI compression sub-use case)

-      When reporting the computational complexity including the pre-processing and post-processing, the complexity metric of FLOPs may be reported separately for the AI/ML model and the pre/post processing. While reporting the FLOPs of pre-processing and post-processing the following boundaries are considered:

-      Estimated raw channel matrix per each frequency unit as an input for pre-processing of the CSI generation part.

-      Precoding vectors per each frequency unit as an output of post-processing of the CSI reconstruction part.

-      AI/ML memory storage in terms of AI/ML model size and number of AI/ML parameters is adopted as part of the ‘Evaluation Metric’, and reported by companies who may select either or both.
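The complexity metrics above (parameter count and FLOPs, reported separately for each model part) can be illustrated with a minimal sketch. The layer dimensions below are purely hypothetical, not taken from any submitted evaluation.

```python
# Hypothetical sketch: parameter count and FLOPs for an MLP-style
# CSI generation part. All layer sizes are illustrative assumptions.

def mlp_complexity(layer_sizes):
    """Return (num_parameters, flops) for a dense MLP.

    A dense layer mapping n_in -> n_out has n_in*n_out weights + n_out biases;
    its forward pass costs roughly 2*n_in*n_out FLOPs (multiply + add).
    """
    params, flops = 0, 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        params += n_in * n_out + n_out
        flops += 2 * n_in * n_out
    return params, flops

# Example encoder: a flattened CSI input of 832 reals compressed to a
# 64-dimensional latent -- purely illustrative dimensions.
params, flops = mlp_complexity([832, 512, 256, 64])
print(f"parameters: {params}, FLOPs: {flops}")
```

Pre-/post-processing FLOPs would be counted separately from the model itself, per the boundaries listed above.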

*** Unchanged text is omitted ***

-      CSI compression: Intermediate KPI monitoring mechanism considered as:

*** Unchanged text is omitted ***

-      Step 2: For each of the K test samples, a bias factor of monitored intermediate KPI (KPIDiff) is calculated as a function of KPIDiff = f ( KPIActual , KPIGenie ), where KPIActual is the actual intermediate KPI, and KPIGenie is the genie-aided intermediate KPI.

*** Unchanged text is omitted ***

-      KPIDiff = f ( KPIActual , KPIGenie ) can take the following forms:

-      Option 1 (baseline for calibration): Gap between KPIActual and KPIGenie, i.e. KPIDiff = (KPIActual - KPIGenie); Monitoring accuracy is the percentage of samples for which | KPIDiff| < KPIth 1, where KPIth 1 is a threshold of the intermediate KPI gap which can take the following values: 0.02, 0.05 and 0.1.

-      Option 2 (optional and up to companies to report): Binary state where KPIActual and KPIGenie have different relationships to their threshold(s), i.e., KPIDiff = (KPIActual > KPIth 2, KPIGenie < KPIth 3) OR (KPIActual < KPIth 2, KPIGenie > KPIth 3), where KPIth 2 is considered to be the same as KPIth 3. Monitoring accuracy is the percentage of samples for which KPIDiff = 0.

*** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related with changes to the model generalization part to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

Model generalization:

In order to study the verification of generalization, the following aspects are encouraged to be reported:

-          The configuration(s)/scenario(s) for training dataset, including potentially the mixed training dataset from multiple configurations/scenarios

-          The configuration(s)/scenario(s) for testing/inference

-          The detailed list of configuration(s) and/or scenario(s)

The following cases are considered for verifying the generalization performance of an AI/ML model over various scenarios/configurations:

*** Unchanged text is omitted ***

To verify the generalization/scalability performance of an AI/ML model over various configurations (e.g., which may potentially lead to different dimensions of model input/output), the set of configurations are considered focusing on one or more of the following aspects:

-          Various bandwidths (e.g., 10MHz, 20MHz) and/or frequency granularities, (e.g., size of subband)

-          Various sizes of CSI feedback payloads, FFS candidate payload number

-          Various antenna port layouts, e.g., (N1/N2/P) and/or antenna port numbers (e.g., 32 ports, 16 ports)

*** Unchanged text is omitted ***

For CSI compression, to achieve the scalability over different output dimensions of the CSI generation part (e.g., different generated CSI feedback dimensions), the generalization cases are elaborated as follows

-          Case 1: The AI/ML model is trained based on training dataset from a fixed output dimension Y1 (e.g., a fixed CSI feedback dimension), and then the AI/ML model performs inference/test on a dataset from the same output dimension Y1.

-          Case 2: The AI/ML model is trained based on training dataset from a single output dimension Y1, and then the AI/ML model performs inference/test on a dataset from a different output dimension Y2.

-          Case 3: The AI/ML model is trained based on training dataset by mixing datasets subject to multiple dimensions of Y1, Y2,..., Yn, and then the AI/ML model performs inference/test on a single dataset of Y1, or Y2,…, or Yn.

-          Notes: For Case 1/2/3, companies to report whether the output of the CSI generation part is before quantization or after quantization. For Case 2/3, the solutions to achieve the scalability between Yi and Yj, are reported by companies, including, e.g., truncation, additional adaptation layer in AI/ML model, etc.
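One of the scalability solutions mentioned in the note above, truncation, can be sketched as follows. The toy "encoder" and all dimensions are assumptions for illustration only.

```python
# Illustrative sketch of the truncation approach for payload scalability:
# one encoder emits the largest latent dimension Y_max, and a smaller
# configured payload Y_i keeps only the leading Y_i entries.

def encode(csi, y_max):
    # Placeholder encoder: any fixed map from CSI to a y_max-dim latent.
    return [sum(csi) * (k + 1) / y_max for k in range(y_max)]

def truncate_latent(latent, y_i):
    """Keep the leading y_i coefficients for a smaller feedback payload."""
    return latent[:y_i]

full = encode([0.1, 0.2, 0.3], y_max=8)
small = truncate_latent(full, y_i=4)
print(len(full), len(small))  # 8 4
```

An additional adaptation layer (the other solution named in the note) would instead map the Y_max-dimensional latent to each configured payload size with learned weights.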

Model Fine-tuning:

For the evaluation of the potential performance benefits of model fine-tuning of CSI feedback enhancement, which is optionally assessed, the following case is considered:

-          The AI/ML model is trained based on training dataset from one Scenario#A/Configuration#A, and then the AI/ML model is updated based on a fine-tuning dataset different than Scenario#A/Configuration#A, e.g., Scenario#B/Configuration#B, Scenario#A/Configuration#B. After that, the AI/ML model is tested on a different dataset than Scenario#A/Configuration#A, e.g., subject to Scenario#B/Configuration#B, Scenario#A/Configuration#B.

-          In this case, the fine-tuning dataset setting (e.g., size of dataset) is to be reported along with the improvement of performance.

Further details on evaluations including training collaboration types

*** Unchanged text is omitted ***

For SLS, spatial consistency Procedure A with 50m decorrelation distance from TR 38.901 is used (if not used, assumptions used need to be reported). UE velocity vector is assumed as fixed over time in Procedure A modelling.

 


*** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related with changes to the training collaboration types part to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

Further details on evaluations including training collaboration types

For the evaluation of the AI/ML based CSI compression sub use cases, a two-sided model is considered as a starting point, including an AI/ML-based CSI generation part to generate the CSI feedback information and an AI/ML-based CSI reconstruction part which is used to reconstruct the CSI from the received CSI feedback information. At least for inference, the CSI generation part is located at the UE side, and the CSI reconstruction part is located at the gNB side.

For the evaluation of Type 2 (Joint training of the two-sided model at network side and UE side, respectively), following procedure is considered as an example:

*** Unchanged text is omitted ***

For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following procedure is considered for the sequential training starting with NW side training (NW-first training):

-          Step1: NW side trains the NW side CSI generation part (which is not used for inference) and the NW side CSI reconstruction part jointly

-          Step2: After NW side training is finished, NW side shares UE side with a set of information (e.g., dataset) that is used by the UE side to be able to train the UE side CSI generation part

-      Companies to report Dataset construction, e.g., the set of information includes the input and output of the Network side CSI generation part, or includes the output of the Network side CSI generation part only, or other information if applicable. Also report the Quantization behaviour, e.g., whether the shared output of the Network side CSI generation part is before or after quantization.

-          Step3: UE side trains the UE side CSI generation part based on the received set of information

-          Other Type 3 NW-first training approaches are not precluded

For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following procedure is considered for the sequential training starting with UE side training (UE-first training):

-          Step1: UE side trains the UE side CSI generation part and the UE side CSI reconstruction part (which is not used for inference) jointly

-          Step2: After UE side training is finished, UE side shares NW side with a set of information (e.g., dataset) that is used by the NW side to be able to train the CSI reconstruction part

-      Companies to report Dataset construction, e.g., the set of information includes the input and label of the UE side CSI reconstruction part, or includes the input of the UE side CSI reconstruction part only, or other information if applicable. Also, report the Quantization behaviour, e.g., whether the shared input of the UE side CSI reconstruction part is before or after quantization.

-          Step3: NW side trains the NW side CSI reconstruction part based on the received set of information

-          Other Type 3 UE-first training approaches are not precluded
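The NW-first sequential training steps above (Step 1 to Step 3) can be sketched conceptually with scalar toy "models", so the data flow is runnable without an ML framework. The linear model forms and values are illustrative assumptions, not an agreed procedure.

```python
# Conceptual sketch of Type 3 NW-first separate training.

# Step 1: NW trains its CSI generation part g_nw (not used for inference)
# and its CSI reconstruction part r_nw jointly. Here we pretend training
# produced these linear coefficients.
def g_nw(v): return 0.5 * v   # NW-side encoder
def r_nw(z): return 2.0 * z   # NW-side decoder, used for inference

# Step 2: NW shares a dataset of (input, output-of-g_nw) pairs with the UE
# (one of the dataset-construction options mentioned above).
inputs = [1.0, 2.0, 3.0, 4.0]
shared = [(v, g_nw(v)) for v in inputs]

# Step 3: UE fits its own encoder g_ue(v) = a*v to mimic g_nw on the shared
# dataset (closed-form least squares for the single coefficient a).
a = sum(v * z for v, z in shared) / sum(v * v for v, _ in shared)
def g_ue(v): return a * v

# End-to-end check: UE encoder + NW decoder roughly reconstructs the input.
print(r_nw(g_ue(2.0)))
```

UE-first training mirrors this flow with the roles of the two sides swapped.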

*** Unchanged text is omitted ***

For the evaluation of an example of Type 3 (Separate training at NW side and UE side), the following evaluation cases for sequential training are considered for multi-vendors:

*** Unchanged text is omitted ***

-          Case 2: For UE-first training, Type 3 training between one NW part model and M>1 separate UE part models

-      Note: Case 2 can be also applied to the M>1 UE part models to N>1 NW part models

-      Companies to report the AI/ML structures for the M>1 UE part models and the NW part model

-      Companies to report the dataset used at UE part models, e.g., same or different dataset(s) among M UE part models

-      Companies to report Dataset construction, e.g., the set of information includes the input and label of the UE side CSI reconstruction part, or includes the input of the UE side CSI reconstruction part only, or other information if applicable. Also, report the Quantization behaviour, e.g., whether the shared input of the UE side CSI reconstruction part is before or after quantization.

-          Case 3: For NW-first training, Type 3 training between one UE part model and N>1 separate NW part models

-      Note: Case 3 can be also applied to the N>1 NW part models to M>1 UE part models

-      Companies to report the AI/ML structures for the UE part model and the N>1 NW part models

-      Companies to report the dataset used at NW part models, e.g., same or different dataset(s) among N NW part models

-      Companies to report Dataset construction, e.g., the set of information includes the input and output of the Network side CSI generation part, or includes the output of the Network side CSI generation part only, or other information if applicable. Also report the Quantization behaviour, e.g., whether the shared output of the Network side CSI generation part is before or after quantization.

-          Case 4: 1-on-1 training with joint training: benchmark/upper bound for performance comparison.

*** Unchanged text is omitted ***

CSI compression sub use case specific aspects:

For the evaluation of the AI/ML based CSI compression sub use cases, a two-sided model is considered as a starting point, including an AI/ML-based CSI generation part to generate the CSI feedback information and an AI/ML-based CSI reconstruction part which is used to reconstruct the CSI from the received CSI feedback information. At least for inference, the CSI generation part is located at the UE side, and the CSI reconstruction part is located at the gNB side.

For the evaluation of the AI/ML based CSI compression sub use case, companies are encouraged to report details of their models, including:

*** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related with changes to the CSI compression sub use case specific aspects to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

For the evaluation of the AI/ML based CSI compression sub use case, companies are encouraged to report details of their models, including:

-          The structure of the AI/ML model, e.g., type (CNN, RNN, Transformer, Inception, …), the number of layers, branches, real valued or complex valued parameters, etc.

-          AI/ML model input (for CSI generation part)/output (for CSI reconstruction part) types for evaluations:

-      Raw channel matrix (in frequency or delay domain), e.g., channel matrix with dimensions of Tx, Rx, and frequency unit

-      Precoding matrix (as a group of eigenvectors or an eTypeII-like reporting)

-          Data pre-processing/post-processing

-          Loss function

-          Specific quantization/dequantization method, e.g., vector quantization, scalar quantization, etc., considering the following aspects:

For the evaluation of the AI/ML based CSI compression sub use cases, at least the following types of AI/ML model input (for CSI generation part)/output (for CSI reconstruction part) are considered for evaluations:

-          Raw channel matrix, e.g., channel matrix with the dimensions of Tx, Rx, and frequency unit. Companies to report whether the raw channel is in the frequency domain or delay domain.

-          Precoding matrix. Companies to report whether the precoding matrix is a group of eigenvector(s) or an eType II-like reporting (i.e., eigenvectors with angular-delay domain representation).

For the evaluation of quantization aware/non-aware training, the following cases are considered and reported by companies

-      Case 1: Quantization non-aware training, where the float-format variables are directly passed from CSI generation part to CSI reconstruction part during the training

-      Fixed/pre-configured quantization method/parameters is applied for the inference phase. Companies to report the design of the fixed/pre-configured quantization method/parameters, e.g., quantization resolution, vector quantization codebook, etc

-      Case 2: Quantization-aware training, where quantization/dequantization is involved in the training process

-      Case 2-1: Fixed/pre-configured quantization method/parameters are applied during the training phase; the same quantization codebook is applied for the inference phase. Companies to report the design of the fixed/pre-configured quantization method/parameters, e.g., quantization resolution, vector quantization codebook, etc.

-      Case 2-2: The quantization method/parameters are updated together with the AI/ML models during the training; when training is finished, the final quantization codebook is applied for the inference phase. Companies to report how to update the quantization method/parameters during the training

-      Quantization methods including uniform vs non-uniform quantization, scalar versus vector quantization, and associated parameters, e.g., quantization resolution, etc.

-      How to use the quantization methods are reported by companies
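The fixed/pre-configured scalar quantizer referred to in Case 1 and Case 2-1 can be sketched as a uniform B-bit quantizer on latent values in [-1, 1]. The resolution and value range are assumptions for illustration.

```python
# Minimal sketch of a fixed uniform scalar quantizer for latent values.

def uniform_quantize(x, bits):
    """Map x in [-1, 1] to the nearest of 2**bits uniformly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    idx = round((x + 1.0) / step)   # nearest reconstruction level index
    return -1.0 + idx * step

latent = [0.33, -0.71, 0.05]
quantized = [uniform_quantize(v, bits=4) for v in latent]
print(quantized)
```

In Case 2-2, the level placement itself would additionally be learned during training rather than fixed in advance.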

For evaluating the performance impact of ground-truth quantization in the CSI compression:

-      Study high resolution quantization methods for ground truth CSI, including at least the following options:

-      High resolution scalar quantization

-      High resolution codebook quantization, e.g., Rel-16 TypeII-like method with new parameters, in which case companies are to report the R16 Type II parameters with specified or new/larger values to achieve higher resolution of the ground-truth CSI labels, e.g., L, reference amplitude, differential amplitude, phase, etc

-      Float32 adopted as the baseline/upper-bound for performance comparisons

-      Consider the legacy values of PC6&PC8 for performance comparison

*** Unchanged text is omitted ***

6.2.2         Performance results

*** Unchanged text is omitted ***

-          Ground-truth CSI quantization method: Float32, i.e., without quantization (baseline/upper-bound for performance comparison)

-      Other high resolution CSI quantization methods can be additionally submitted for comparison, e.g., R16 eType II-like method with new parameters (consider the legacy values of PC6&PC8 as the baseline/lower-bound of performance comparison), scalar quantization, etc.

*** Unchanged text is omitted ***

 

 

R1-2310451         Summary#3 for CSI evaluation of [114bis-R18-AI/ML]              Moderator (Huawei)

From Friday session

Agreement

·       Adopt the following TP related with changes to “Summary of Performance Results for CSI feedback enhancement” in TR 38.843, Section 6.2.2.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.2         Performance results

*** Unchanged text is omitted ***

6.2.2.8      Summary of Performance Results for CSI feedback enhancement

The following aspects have been studied for the evaluation on AI/ML based CSI compression in Rel-18:

·        From the perspective of basic performance gain over non-AI/ML benchmark (assuming 1 on 1 joint training without considering generalization),

n   It has been studied with corresponding observations on:

u  the metrics of SGCS, mean UPT, 5% UPT, CSI feedback overhead reduction

u  the benchmark of R16 Type II codebook

n   It has been studied but lacks observations on:

u  the metric of NMSE

u  the benchmarks of Type I codebook and R17 Type II codebook

n   It has been studied with corresponding observations on complexity but without comparison with non-AI/ML.

·        From the perspective of AI/ML solutions (assuming 1 on 1 joint training without considering generalization),

n   It has been studied with corresponding observations on: model input/output type, monitoring for intermediate KPI (including NW side monitoring and UE side monitoring), quantization methods (including quantization awareness for training, and quantization format), and high resolution ground-truth CSI for training, with the metric of SGCS.

n   It has been studied but lacks observations on: the options of CQI/RI calculation, and the options of rank>1 solution

·        From the perspective of generalization over various scenarios (assuming 1 on 1 joint training),

n   It has been studied with corresponding observations on (with the metric of SGCS):

u  the scenarios including various deployment scenarios, various outdoor/indoor UE distributions, various carrier frequencies, and various TxRU mappings

u  the approach of dataset mixing (generalization Case 3)

n   It has been studied but lacks observations on:

u  other aspects of scenarios

u  the approach of fine-tuning

·        From the perspective of scalability over various configurations (assuming 1 on 1 joint training),

n   It has been studied with corresponding observations on (with the metric of SGCS):

u  the configurations including various bandwidths/frequency granularities, various CSI feedback payloads, and various antenna port numbers

u  the approach of dataset mixing (generalization Case 3), and the approach of fine-tuning for CSI feedback payloads

u  the scalability solutions

n   It has been studied but lacks observations on:

u  other aspects of configurations

u  the approach of fine-tuning for configurations other than CSI feedback payloads

·        From the perspective of multi-vendor joint training (without considering generalization),

n   It has been studied with corresponding observations on (with the metric of SGCS):

u  joint training between 1 NW part model and M>1 UE part models, and joint training between 1 UE part model and N>1 NW part models

n   It has been studied but lacks observations on:

u  joint training between N>1 NW part models and M>1 UE part models

u  performance comparison between simultaneous training and sequential training

·        From the perspective of separate training (without considering generalization),

n   It has been studied with corresponding observations on (with the metric of SGCS):

u  NW first training, including 1 NW part model to 1 UE part model with same backbone and with different backbones, and 1 UE part model to N>1 NW part models

u  UE first training, including 1 NW part model to 1 UE part model with same backbone and with different backbones, and 1 NW part model to M>1 UE part models

u  Impact of shared dataset under 1 NW part model to 1 UE part model for NW first training and UE first training

n   It has been studied but lacks observations on:

u  the metric of air-interface overhead of information (e.g., dataset) sharing
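SGCS, the intermediate KPI cited throughout the observations above, is the squared cosine similarity between the target and output precoding vectors, averaged over samples/subbands. A minimal single-layer sketch, with invented example vectors:

```python
# Squared Generalized Cosine Similarity (SGCS) for one layer.

def sgcs(target, output):
    """|target^H output|^2 / (||target||^2 * ||output||^2), complex vectors."""
    inner = sum(t.conjugate() * o for t, o in zip(target, output))
    nt = sum(abs(t) ** 2 for t in target)
    no = sum(abs(o) ** 2 for o in output)
    return abs(inner) ** 2 / (nt * no)

v = [1 + 1j, 0.5 - 0.2j]
print(sgcs(v, v))  # identical vectors give ~1.0
```

Values near 1 indicate the reconstructed/predicted precoder closely matches the target eigenvector.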

 

The following aspects have been studied for the evaluation on AI/ML based CSI prediction:

·        From the perspective of basic performance gain over non-AI/ML benchmark (without considering generalization),

n   It has been studied with corresponding observations on:

u  the metrics of SGCS, mean UPT, 5% UPT;

u  the benchmarks of nearest historical CSI and auto-regression/Kalman filter based CSI prediction.

l   Note: the benchmark of level x based CSI prediction is represented by generalization cases.

n   It has been studied but lacks observations on:

u  the impact of modeling spatial consistency

u  the metrics of NMSE

n   It has been studied with corresponding observations on complexity but without comparison with non-AI/ML

·        From the perspective of AI/ML solutions (without considering generalization),

n   It has been studied with corresponding observations on (with the metric of SGCS and the benchmark of nearest historical CSI): impact of input type, impact of UE speed, impact of prediction window, impact of observation window

·        From the perspective of generalization over various scenarios,

n   It has been studied with corresponding observations on (with the metric of SGCS):

u  the scenario including various UE speeds

u  the approach of dataset mixing (generalization Case 3)

n   It has been studied but lacks observations on:

u  various deployment scenarios, various carrier frequencies, and other aspects of scenarios.

u  the approach of fine-tuning

·        From the perspective of scalability over various configurations, it has been studied but lacks observations.

*** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related with changes to the CSI prediction sub use case specific aspects to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

CSI prediction sub use case specific aspects:

For the evaluation of the AI/ML based CSI prediction sub use case, companies are encouraged to report details of their models, including:

-   The structure of the AI/ML model, e.g., type (FCN, RNN, CNN,…), the number of layers, branches, format of parameters, etc.

-   The input CSI type, e.g., raw channel matrix, eigenvector(s) of the raw channel matrix, feedback CSI information, etc.

-          Including assumptions on the observation window, i.e., number/time distance of historic CSI/channel measurements

-   The output CSI type, e.g., channel matrix, eigenvector(s), feedback CSI information, etc.

-          Including assumptions on the prediction window, i.e., number/time distance of predicted CSI/channel

-   Data pre-processing/post-processing

-   Loss function

For the input CSI type, both of the following types are considered for evaluations:

-   Raw channel matrixes.

-   Eigenvector(s).

For SLS, spatial consistency Procedure A with 50m decorrelation distance from TR 38.901 is used (if not used, assumptions used need to be reported). UE velocity vector is assumed as fixed over time in Procedure A modelling.

*** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related to the results calibration part to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.1         Evaluation assumptions, methodology and KPIs

*** Unchanged text is omitted ***

CSI compression sub use case specific aspects:

*** Unchanged text is omitted ***

For CSI compression sub use case with rank ≥ 1, the AI/ML model setting to adapt to ranks/layers is to be reported amongst the following options:

*** Unchanged text is omitted ***

-      For CSI compression sub use case with rank >1, the memory storage/number of parameters is reported as the summation of memory storage/number of parameters over all models potentially used for any layer/rank, e.g.,

-      Option 1-1 (rank specific)/Option 3-2 (layer common and rank specific): Sum of memory storage/number of parameters over all rank specific models.

-      Option 1-2 (rank common): A single memory storage/number of parameters for the rank common model.

-      Option 2-1 (layer specific and rank common): Sum of memory storage/number of parameters over all layer specific models.

-      Option 2-2 (layer specific and rank specific): Sum of memory storage/number of parameters for the specific models over all ranks and all layers per rank.

-      Option 3-1 (layer common and rank common): A single memory storage/number of parameters for the common model

For the evaluation of CSI compression, the specific CQI determination method(s) for AI/ML can be reported by introducing an additional field in the template, e.g.,

-      Option 1a: CQI is calculated based on the target CSI from the realistic channel estimation.

-      Option 1b: CQI is calculated based on the target CSI from the realistic channel estimation and potential adjustment.

-      Option 1c: CQI is calculated based on traditional codebook.

-      Option 2a: CQI is calculated based on CSI reconstruction output, if CSI reconstruction model is available at the UE and UE can perform reconstruction model inference with potential adjustment.

-      Option 2a-1: The CSI reconstruction part for CQI calculation at the UE is the same as the actual CSI reconstruction part at the NW.

-      Option 2a-2: The CSI reconstruction part for CQI calculation at the UE is a proxy model, which is different from the actual CSI reconstruction part at the NW.

-      Option 2b: CQI is calculated using a two-stage approach: the UE derives CQI using precoded CSI-RS transmitted with a reconstructed precoder.

*** Unchanged text is omitted ***

6.2.2         Performance results

                                                                                     *** Unchanged text is omitted ***

For the evaluation of CSI compression, the specific CQI determination method(s) for AI/ML can be reported by introducing an additional field in the template, e.g.,

-   Option 2a: CQI is calculated based on CSI reconstruction output, if CSI reconstruction model is available at the UE and UE can perform reconstruction model inference with potential adjustment.

-      Option 2a-1: The CSI reconstruction part for CQI calculation at the UE is the same as the actual CSI reconstruction part at the NW.

-      Option 2a-2: The CSI reconstruction part for CQI calculation at the UE is a proxy model, which is different from the actual CSI reconstruction part at the NW.

-   Option 2b: CQI is calculated using a two-stage approach: the UE derives CQI using precoded CSI-RS transmitted with a reconstructed precoder.

-   Option 1a: CQI is calculated based on the target CSI from the realistic channel estimation.

-   Option 1b: CQI is calculated based on the target CSI from the realistic channel estimation and potential adjustment.

-   Option 1c: CQI is calculated based on traditional codebook.

For the evaluation of CSI compression of 1-on-1 joint training without model generalization/scalability, the following baselines are recommended to facilitate calibration of results:

-      Benchmark: R16 eType II CB;

-      Others can be additionally submitted, e.g., Type I CB.

-      Input/Output type: Eigenvectors of the current CSI

-      Others can be additionally submitted, e.g., eigenvectors with additional past CSI, eType II-like input, raw channel matrix, etc.

                                                                                     *** Unchanged text is omitted ***

 

Agreement

·       Adopt the following TP related to the observation part to TR 38.843.

------------------ Text Proposal for 38.843 v1.0.0 ------------------

6.2.2         Performance results

                                                                                     *** Unchanged text is omitted ***                 

For the evaluation of AI/ML based CSI compression compared to the benchmark in terms of mean UPT under FTP traffic, more gains are achieved by Max rank 2 compared with Max rank 1 in general:

*** Unchanged text is omitted ***

-        For Max rank 4:

*** Unchanged text is omitted ***

o    For RU≥70%, 3 sources observe the performance gain of -1%~17%

§   3 sources observe the performance gain of 3%~17% at CSI overhead A (small overhead);

§   2 sources observe the performance gain of 6.64%~17% at CSI overhead B (medium overhead);

§   3 sources observe the performance gain of -1%~8.40% at CSI overhead C (large overhead);

o    Note: 1 source observes significant gain or significant loss under Max rank 4 due to specific CQI/RI selection method (e.g., Option 1a/2a) for AI/ML and/or CQI/RI determination method for eType II benchmark.

The above results are based on the following assumptions besides the assumptions of the agreed EVM table:

o    Precoding matrix of the current CSI is used as the model input.

o    Training data samples are not quantized, i.e., Float32 is used/represented.

o    1-on-1 joint training is assumed.

o    The performance metric is mean UPT for Max rank 1, Max rank 2, or Max rank 4.

o    Benchmark is Rel-16 Type II codebook.

o    Note: Results refer to Table 5.12 of R1-2308340.

*** Unchanged text is omitted ***

For the evaluation of AI/ML based CSI compression compared to the benchmark in terms of 5% UPT under FTP, more gains are achieved by Max rank 2 compared with Max rank 1 in general:

*** Unchanged text is omitted ***

-        For Max rank 4:

*** Unchanged text is omitted ***

o    For RU≥70%, 3 sources observe the performance gain of 2%~31%

§   3 sources observe the performance gain of 5.8%~31% at CSI overhead A (small overhead);

§   2 sources observe the performance gain of 10.2%~30% at CSI overhead B (medium overhead);

§   3 sources observe the performance gain of 2%~15% at CSI overhead C (large overhead);

o    Note: 1 source observes significant gain or significant loss under Max rank 4 due to specific CQI/RI selection method (e.g., Option 1a/2a) for AI/ML and/or CQI/RI determination method for eType II benchmark.

                                                                                     *** Unchanged text is omitted ***

 

For the evaluation of intermediate KPI based monitoring mechanism for CSI compression, for monitoring Case 1, in terms of monitoring accuracy with Option 1,

-        For ground truth CSI format of R16 eType II CB, monitoring accuracy in general increases with the resolution of the ground-truth CSI (number of bits for each sample of ground-truth CSI), with the impact of increased overhead, wherein

o    for ground truth CSI format of R16 eType II CB with PC#6, 4 sources observe KPIDiff as 13.2%~71.6%/ 28.5%~100%/ 68.4%~100% for KPIth_1=0.02/0.05/0.1, respectively.

§   Note: two sources observed averaging on the test samples improves the monitoring accuracy.

o    for ground truth CSI format of R16 eType II CB with PC#8, 5 sources observe KPIDiff as 21%~43.0%/ 48.1%~79.1%/ 79.8%~97.1% for KPIth_1=0.02/0.05/0.1, respectively.

o    for ground truth CSI format of R16 eType II CB with new parameter of 580-750bits CSI payload size, 2 sources observe KPIDiff as 35.4%~63%/ 77.9%~93.0%/ 99.5%~99.9% for KPIth_1=0.02/0.05/0.1, respectively, which have 12.7%~20%/ 13.9%~29.8%/ 8%~31.1% gain over PC#8.

o    for ground truth CSI format of R16 eType II CB with new parameter of around 1000bits CSI payload size, 4 sources observe KPIDiff as 34.9%~89%/ 82.9%~100%/ 99.9%~100% for KPIth_1=0.02/0.05/0.1, respectively, which have 12.2%~68%/ 18%~43.62%/ 2.9%~31% gain over PC#8 from 3 sources and 4.67%~10.6%/ 0%~5.88%/ 0%~0.49% gain over PC#6 from 1 source.

o    for ground truth CSI format of R16 eType II CB with new parameter of around 1600bits CSI payload size, 2 sources observe KPIDiff as 89.1%~97%/ 99.9%~100%/ 100% for KPIth_1=0.02/0.05/0.1, respectively, which have 76%/33%/3% gain over PC#8 from 1 source.

-        for ground truth CSI format of 4 bits scalar quantization, 2 sources observe KPIDiff as 9.4%~47%/ 96.3%~100%/ 100% for KPIth_1=0.02/0.05/0.1, respectively.
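For orientation, the monitoring-accuracy figures above count the share of test samples whose monitored intermediate KPI stays within a threshold of the genie-aided KPI. A minimal sketch of that computation follows; the exact per-sample KPI definition is per the agreed EVM, and the function name and inputs here are illustrative assumptions.

```python
import numpy as np

def kpi_diff(kpi_actual, kpi_genie, kpi_th):
    """Fraction of test samples for which the monitored (actual)
    intermediate KPI, e.g. SGCS, deviates from the genie-aided KPI
    by at most kpi_th (illustrative definition)."""
    diff = np.abs(np.asarray(kpi_actual) - np.asarray(kpi_genie))
    return float(np.mean(diff <= kpi_th))

# Two samples: one within the 0.05 threshold, one outside.
print(kpi_diff([0.90, 0.80], [0.92, 0.70], kpi_th=0.05))  # 0.5
```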

                                                                                     *** Unchanged text is omitted ***

For the comparison of quantization methods for CSI compression, quantization non-aware training (Case 1) is in general inferior to the quantization aware training (Case 2-1/2-2), and may lead to lower performance than the benchmark:

-        For scalar quantization, compared with benchmark,

o    -2.4%~-43.2% degradations are observed for  quantization non-aware training (Case 1) from 6 sources.

o    3.9%~8.64% gains are observed for quantization aware training with fixed/pre-configured quantization method/parameters (Case 2-1) from 5 sources, which are 17.3%~83.2% gains over  quantization non-aware training (Case 1) from 5 sources and 7.56%~11.55%  gains over  quantization non-aware training (Case 1) from 1 source.

§   Note: 0.72% gains are observed for Case 2-1 from 1 source due to SQ parameter chosen without matching latent distribution, which achieves 13.9% gains over Case 1.

o    7.55% gains are observed for quantization aware training with jointly updated quantization method/parameters (Case 2-2) from 1 source, which are 23.1% gains over quantization non-aware training (Case 1) from 1 source.

-        For vector quantization, compared with benchmark,

o    -2%~-10% degradations are observed for  quantization non-aware training (Case 1) from 1 source.

o    5.64%~8.91% gains are observed for quantization aware training with fixed/pre-configured quantization method/parameters (Case 2-1) from 3 sources, which are 3%~21.6% gains over quantization non-aware training (Case 1) from 3 sources.

o    4.6%~13.01% gains are observed for quantization aware training with jointly updated quantization method/parameters (Case 2-2) from 7 sources, which are 10.7%~30% gains over  quantization non-aware training (Case 1) from 4 sources and 3.66%~9.8% gains over  quantization non-aware training (Case 1) from 2 sources.

o    In general, Case 2-2 outperforms Case 2-1 with 0.46%~3.8% gains, as observed by 6 sources.
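The Case 1 vs. Case 2-1 distinction compared above can be illustrated in code: quantization non-aware training optimizes on the unquantized latent and only quantizes at inference, while quantization-aware training includes the quantizer in the forward pass (typically via a straight-through estimator in an autodiff framework). The 2-bit quantizer and the [-1, 1] range below are illustrative assumptions, not agreed parameters.

```python
import numpy as np

def uniform_sq(z, bits=2):
    """Fixed/pre-configured uniform scalar quantizer on [-1, 1]
    (Case 2-1 style), mapping to a 2**bits-level grid."""
    levels = 2 ** bits
    idx = np.round((np.clip(z, -1.0, 1.0) + 1.0) / 2.0 * (levels - 1))
    return idx / (levels - 1) * 2.0 - 1.0

def latent_for_training(z, quant_aware):
    """Case 1 (quantization non-aware): train on the unquantized latent
    and quantize only at inference, causing the mismatch observed above.
    Case 2 (quantization aware): quantize in the forward pass; in an
    autodiff framework the straight-through estimator
        z + stop_gradient(uniform_sq(z) - z)
    keeps an identity gradient while the value equals uniform_sq(z)."""
    return uniform_sq(z) if quant_aware else z

z = np.array([-1.0, -0.2, 0.7, 1.0])
print(uniform_sq(z))  # snapped to the 4-level grid {-1, -1/3, 1/3, 1}
```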

*** Unchanged text is omitted ***

For the comparison of quantization methods for CSI compression, in general vector quantization (VQ) has comparable performance with scalar quantization (SQ):

-        For SQ and VQ under the same training case, it is

o    observed by 3 sources that VQ under Case 2-1 has -1%~-4.5% degradation over SQ under Case 2-1,

o    observed by 1 source that VQ under Case 2-1 has 1.1% gain over SQ under Case 2-1, and

o    observed by 3 sources that VQ under Case 2-2 has 0.7%~5.1% gain over SQ under Case 2-2.

o    Note: VQ under Case 2-1 has 8% gains over SQ under Case 2-1 as observed from 1 source due to SQ parameter chosen without matching latent distribution.
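For contrast with the scalar quantizer, vector quantization maps each segment of the latent to the nearest codeword of a codebook (fixed in Case 2-1, jointly updated with the models in Case 2-2). A minimal nearest-neighbour sketch, with an assumed toy codebook and segment dimension:

```python
import numpy as np

def vq_encode(z, codebook):
    """Map each latent segment to the index of its nearest codeword.
    z: (num_segments, seg_dim); codebook: (num_codewords, seg_dim).
    Feedback cost: num_segments * log2(num_codewords) bits."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2, axis=1)

def vq_decode(indices, codebook):
    """Reconstruct the latent from fed-back codeword indices."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
z = np.array([[0.1, -0.2], [0.9, 1.2]])
idx = vq_encode(z, codebook)
print(idx)                        # nearest codewords: [0 1]
print(vq_decode(idx, codebook))
```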

                                                                                     *** Unchanged text is omitted ***

For the evaluation of NW first separate training with dataset sharing manner for CSI compression for the pairing of 1 NW to 1 UE (Case 1), as compared to 1-on-1 joint training between the NW part model and the UE part model,

-        For the NW first separate training case where the same backbone is adopted for both the NW part model and the UE part model, minor degradation is observed for both the cases where the shared output of the Network side CSI generation part is before or after quantization:

o    For the case where the shared output of the Network side CSI generation part is after quantization, 9 sources observe -0%~-0.5% degradation, 10 sources observe -0.5%~-1% degradation, and 2 sources observe -1%~-1.3% degradation.

o    For the case where the shared output of the Network side CSI generation part is before quantization, 6 sources observe -0%~-0.8% degradation, and 1 source observes -1%~-1.5% degradation.

-        Note: the dataset sharing behaviour from above sources follows the example of the agreement “the set of information includes the input and output of the Network side CSI generation part, or includes the output of the Network side CSI generation part only”.
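The NW-first dataset-sharing flow evaluated above can be sketched end to end with toy stand-ins: a linear autoencoder (via SVD) replaces the actual NW-side models, and a least-squares fit replaces UE-side training. All model choices and the coarse quantizer below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 16))   # stand-in for target CSI samples

# Step 1 (NW): jointly train the CSI generation part (encoder) and the
# CSI reconstruction part (decoder); here a toy linear autoencoder.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W_enc_nw = Vt[:4].T                   # 16-dim input -> 4-dim latent

# Step 2: NW shares {input, output of the NW-side CSI generation part};
# here the shared output is taken AFTER quantization (one of the two
# evaluated variants), using a coarse uniform quantizer.
Z_shared = np.round(X @ W_enc_nw * 8) / 8

# Step 3 (UE): UE trains its own CSI generation part on the shared
# dataset so its output matches the NW-side latent (least-squares fit
# as a stand-in for training).
W_enc_ue, *_ = np.linalg.lstsq(X, Z_shared, rcond=None)

# Residual mismatch is dominated by the quantization noise in the shared
# labels, consistent with the minor degradations reported above.
print(np.abs(X @ W_enc_ue - Z_shared).mean())
```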

                                                                                     *** Unchanged text is omitted ***

For the evaluation of NW/UE first separate training with dataset sharing manner for CSI compression for the pairing of 1 NW to 1 UE (Case 1), as compared to the case where the same set of dataset is applied for training the NW part model and training the UE part model, if the dataset#2 applied for training the UE/NW part model is a subset of the dataset#1 applied for training the NW/UE part model,

-        If the dataset#2 is appropriately selected, minor additional performance degradation can be achieved, as -0%~-0.59% gap is observed from 3 sources.

-        If the dataset#2 has a significantly reduced size compared to dataset#1, moderate/significant additional performance degradation may occur, as -0.6%~-4.83% gap is observed from 4 sources.

-        Note: the dataset sharing behavior from above sources follows the example of the agreement where “the set of information includes the input and output of the Network side CSI generation part, or includes the output of the Network side CSI generation part only”.

                                                                                     *** Unchanged text is omitted ***

For the evaluation of UE first separate training with dataset sharing manner for CSI compression for the pairing of 1 NW to 1 UE (Case 1), as compared to 1-on-1 joint training between the NW part model and the UE part model,

-        For the UE first separate training case where the same backbone is adopted for both the UE part model and the NW part model, minor degradation is observed in general for both the cases where the shared input of the UE side CSI reconstruction part is before or after quantization:

o    For the case where the shared input of the UE side CSI reconstruction part is after quantization, 9 sources observe -0%~-0.42% degradation, 2 sources observe -0.7%~-0.9% degradation, and 3 sources observe -1.05%~-1.8% degradation.

o    For the case where the shared input of the UE side CSI reconstruction part is before quantization, 3 sources observe -0%~-0.8% degradation, and 2 sources observe -1.8%~-2.9% degradation.

-        Note: the dataset sharing behaviour from above sources follows the example of the agreement where “the set of information includes the input and label of the UE side CSI reconstruction part, or includes the input of the UE side CSI reconstruction part only”.

                                                                                     *** Unchanged text is omitted ***

For the scalability verification of AI/ML based CSI compression over various CSI payload sizes, compared to the generalization Case 1 where the AI/ML model is trained with dataset subject to a certain CSI payload size#B and applied for inference with a same CSI payload size#B,

-        For generalization Case 2, significant performance degradations are observed in general, as -5.3%~-14.7% degradations are observed by 2 sources.

-        Generalized performance of the AI/ML model can be achieved (-0%~-5.9% loss) under generalization Case 3 for the inference on CSI payload size#B, if the training dataset is constructed with data samples subject to multiple CSI payload sizes including CSI payload size#B, and an appropriate scalability solution is performed to scale the dimension of the AI/ML model, shown by 13 sources (10 sources showing -0%~-2.2% loss, 7 sources showing -2.3%~-5.9% loss, 5 sources showing positive gain). The scalability solutions adopted are as follows:

o    Pre/post-processing of truncation/padding, adopted by 6 sources, showing -0% ~-5.9% loss or positive gain.

o    Various quantization granularities, adopted by 1 source, showing -0.7% loss or positive gain.

o    Adaptation layer in the AL/ML model, adopted by 6 sources, showing -0%~-4.78% loss or positive gain.

o    Note: Significant degradations of up to -14.22% are still observed by 2 sources for generalization Case 3.

-        Generalized performance of the AI/ML model can also be achieved by finetuning models on CSI payload size#B, showing loss [0%~-2.2%] by 2 sources

The above results are based on the following assumptions:

*** Unchanged text is omitted ***
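The truncation/padding scalability solution referenced above can be sketched as simple pre/post-processing around a single model dimensioned for the largest payload. The maximum latent size and shapes below are illustrative assumptions.

```python
import numpy as np

MAX_PAYLOAD = 64  # latent size the single model is dimensioned for (assumed)

def pad_for_model(z, max_len=MAX_PAYLOAD):
    """Pre-processing: zero-pad a smaller-payload latent so one AI/ML
    model dimension serves multiple CSI payload sizes (a generalization
    Case 3 scalability solution)."""
    out = np.zeros(max_len, dtype=z.dtype)
    out[: z.size] = z
    return out

def truncate_for_payload(z_full, payload_len):
    """Post-processing: keep only the first payload_len latent entries
    for the configured CSI payload size."""
    return z_full[:payload_len]

z_small = np.arange(24, dtype=float)   # latent for a smaller payload size
z_model = pad_for_model(z_small)       # shape (64,), zeros appended
print(z_model.shape)                   # (64,)
```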

 

Agreement

Capture the following high level observations for CSI prediction to section 6.2.2.8 of TR 38.843:

·       From the perspective of model input/output type, using the raw channel matrix as the model input is more beneficial in performance than using the precoding matrix

·       The gain of AI/ML based CSI prediction over the benchmark of the nearest historical CSI is impacted by the observation window length, prediction window length, and UE speed

·       From the perspective of generalization over the several UE speeds that have been evaluated, compared to generalization Case 1 where the AI/ML model is trained with dataset subject to a certain UE speed#B and applied for inference with a same UE speed#B,

o   For generalization Case 2 where the AI/ML model is trained with dataset from a different UE speed#A, generalized performance may be achieved for some certain combinations of UE speed#A and UE speed#B but not for others

o   For generalization Case 3 where the training dataset is constructed with data samples subject to multiple UE speeds including UE speed#B, generalized performance of the AI/ML model can be achieved in general
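The three generalization cases above reduce to different training-data selections. A minimal sketch (the function, speed keys, and dataset shapes are illustrative assumptions):

```python
import numpy as np

def training_set(datasets_by_speed, case, target_speed, mix_speeds=None):
    """Training-data selection for the three generalization cases:
    Case 1: train and infer on the same UE speed (target_speed).
    Case 2: train on a different single speed (first entry of mix_speeds).
    Case 3: mix samples from several speeds including target_speed."""
    if case == 1:
        return datasets_by_speed[target_speed]
    if case == 2:
        return datasets_by_speed[mix_speeds[0]]
    mixed = np.concatenate([datasets_by_speed[s] for s in mix_speeds])
    rng = np.random.default_rng(0)
    rng.shuffle(mixed)  # shuffles samples (first axis) in place
    return mixed

# Toy datasets keyed by UE speed in km/h.
data = {10: np.ones((100, 8)) * 10, 30: np.ones((100, 8)) * 30,
        60: np.ones((100, 8)) * 60}
print(training_set(data, case=3, target_speed=30,
                   mix_speeds=[10, 30, 60]).shape)  # (300, 8)
```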

 

Final summary in R1-2310686.

 

=====================================================================================

R1-2310403         FL summary #1 on remaining open aspects of AI/ML positioning          Moderator (vivo)

From Wednesday session

Agreement

Capture the following TP in Section 8 of the 3GPP TR 38.843 for the conclusion on AI/ML positioning part.

-------------------------------------------- Start of Text Proposal ----------------------------------------------------------------

This study focused on the analysis of potential enhancements necessary to enable AI/ML for positioning accuracy enhancements with NR RAT-dependent positioning methods.

Evaluation scenarios and KPIs were identified for system level analysis of AI/ML enabled RAT-dependent positioning techniques as described in Section 6.4.

Direct AI/ML positioning and AI/ML assisted positioning were identified and selected as the representative sub-use cases. Evaluation results have shown that in the considered evaluation scenarios (i.e., InF-DH, and other InF scenarios), both direct AI/ML positioning and AI/ML assisted positioning can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods. Various aspects of AI/ML for positioning accuracy enhancement were investigated and evaluated as described in Section 6.4, which provides a summary of evaluation results from different sources.

The necessity, feasibility and potential enhancements to facilitate the support of AI/ML for positioning accuracy enhancements with NR RAT-dependent positioning methods were studied and the outcomes are outlined in Section 7.

Measurements, signalling and procedures were studied to enable AI/ML for positioning accuracy enhancements with NR RAT-dependent positioning methods and are recommended to be further investigated in normative work, and specified if necessary.

A variety of enhancements for measurements (e.g., based on extensions to current positioning measurements or with new measurements) were identified as potentially beneficial (e.g., trading off positioning accuracy requirements and signalling overhead) and are recommended to be investigated further and, if needed, specified during normative work.

Based on conducted analysis, it is recommended to proceed with normative work for AI/ML based positioning.

-------------------------------------------- End of Text Proposal -----------------------------------------------------------------

 


 RAN1#115

8.14   Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface

Please refer to RP-221348 for detailed scope of the SI.

 

[115-R18-AI/ML] – Taesang (Qualcomm)

Email discussion on AI/ML

-        To be used for sharing updates on online/offline schedule, details on what is to be discussed in online/offline sessions, tdoc number of the moderator summary for online session, etc

 

R1-2310933         Text Proposals to TR 38.843           Ericsson Inc.

R1-2312055         Updated TR 38.843 including RAN1 agreements from RAN1#114bis     Qualcomm Incorporated

Tuesday decision

Agreement

The updated TR (R1-2312055) is endorsed with an update to remove RAN1 response to RAN2 LS part B.

8.14.1    General aspects of AI/ML framework

Including characterization of defining stages of AI/ML algorithm and associated complexity, UE-gNB collaboration, life cycle management, dataset(s), and notation/terminology. Also including any common aspects of evaluation methodology.

 

R1-2310818         Discussion on remaining open issues of AI/ML for air-interface general framework             FUTUREWEI

R1-2310844         Remaining issues on general aspects of AI/ML framework              Huawei, HiSilicon

R1-2310977         Discussion on general aspects of AI/ML framework   Continental Automotive

R1-2310987         Discussion on general aspects of common AI PHY framework              ZTE

R1-2310999         Discussion on remaining issues of AI/ML general framework              Ericsson

R1-2311047         Discussion on general aspects of AI/ML framework   Fujitsu

R1-2311114         Discussions on AI/ML framework  vivo

R1-2311151         General aspects of AI/ML framework for NR air interface              Intel Corporation

R1-2311182         Discussion on general aspects of AIML framework    Spreadtrum Communications

R1-2311269         On general aspects of AI/ML framework      OPPO

R1-2311326         General aspects of AI/ML framework           CATT

R1-2311390         Discussion on the remaining issues of AI/ML framework              xiaomi

R1-2311423         Discussion on general aspects of AI ML framework   NEC

R1-2311436         Remaining issues on AI/ML framework        LG Electronics

R1-2311449         Remaining issues on general aspects of AI/ML framework              Panasonic

R1-2311499         Discussion on general aspects of AI/ML framework   CMCC

R1-2311527         Remaining issues on general AI/ML framework         Sony

R1-2311529         General aspects of AI and ML framework for NR air interface              NVIDIA

R1-2311539         Remaining details on general aspects of AI/ML framework              InterDigital, Inc.

R1-2311572         On General Aspects of AI/ML Framework   Google

R1-2311639         Discussion on general aspects of AI/ML framework   NTT DOCOMO, INC.

R1-2311704         Discussion on general aspect of AI/ML framework     Apple

R1-2311760         Discussion on general aspects of AI_ML framework for NR air interface ETRI

R1-2311783         Remaining Issues on General Aspects of AI/ML         Nokia, Nokia Shanghai Bell

R1-2311864         Views on the remaining general aspects of AI/ML framework              Samsung

R1-2311888         Discussion on general aspects of AI/ML framework   Sharp

R1-2311908         Remaining issues on general aspects of AI/ML framework              Fraunhofer IIS, Fraunhofer HHI

R1-2311936         Remaining issues on general aspects of AI/ML framework              Ruijie Network Co. Ltd

R1-2311940         On general aspects of AI/ML framework      Lenovo

R1-2312056         General aspects of AI-ML framework           Qualcomm Incorporated

R1-2312087         General Aspects of AI/ML framework          AT&T

R1-2312090         General aspects of AI/ML framework for NR air interface              Baicells

R1-2312106         Discussions on General Aspects of AI/ML Framework              Indian Institute of Tech (M), IIT Kanpur

R1-2312120         Discussions on General Aspects of AI_ML Framework            IIT Kanpur, Indian Institute of Technology Madras (IITM)

R1-2312174         Prediction of untransmitted beams in a UE-side AI-ML model              Rakuten Symphony

 

R1-2312402         Summary#1 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Tuesday session

Agreement

For model identification of UE-side or UE-part of two-sided models, further clarification is made as follows.

·       The following are example use cases of online model identification (i.e., Type B1 and B2)

o   Model identification in model transfer from NW to UE

o   Model identification with data collection related configuration(s) and/or indication(s) and/or dataset transfer

o   Model identification with monitoring related configuration(s)/procedure(s)

o   Model identification due to update on UE-side model operations

·       The following are example use cases of offline model identification (i.e., Type A)

o   To align information and/or indication on NW-side additional conditions offline

o   Two-sided model pairing

o   Model identification followed by data collection related configuration(s) and/or indication(s)

o   Model identification followed by monitoring related configuration(s) and/or indication(s), e.g., for UE to identify an applicable model to NW after monitoring candidate models.

o   Model identification to enable monitoring at the NW/UE sides, e.g., to achieve consistency between training and inference regarding NW-side additional conditions (if identified) via monitoring

·       Note: Offline model identification may be applicable for some of the above example use cases

Friday decision: Above agreement is replaced by:

Agreement

For model identification of UE-side or UE-part of two-sided models, further clarification is made as follows.

·       The following are example use cases Type B1 and B2.

o   Model identification in model transfer from NW to UE.

o   Model identification with data collection related configuration(s) and/or indication(s) and/or dataset transfer.

·       Note: Other example use cases are not precluded.

·       Note: Offline model identification may be applicable for some of the above example use cases.

 

 

R1-2312403         Summary#2 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

Presented in Wednesday session.

 

R1-2312404         Summary#3 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Thursday session

Agreement

For model delivery/transfer to UE (for UE-side models and UE-part of two-sided models):

·       Model delivery/transfer to UE, if feasible, may be beneficial to handle scenario/configuration specific (including site-specific configuration/channel conditions) models (i.e., when a single model cannot generalize well to multiple scenarios/configurations/sites), to reduce the device storage requirement.

·       Model delivery/transfer to UE after offline compiling and/or testing may be friendlier from the UE’s implementation point of view compared to the case without offline compiling and/or testing. On the other hand, the case without offline compiling and/or testing (which can update parameters for a known model structure) may have a benefit at least in terms of a shorter model parameter update timescale.

·       For model trained at network side, Case y (w/ network-side training) and Case z2 may incur the burden of offline cross-vendor collaboration such as sending a model to the UE-side and/or compiling a model.

·       For model trained at UE side/neutral site, Case z1 and Case z3 may incur the burden of offline cross-vendor collaboration to send the trained model from the UE-side to the network, compared to Case y (w/ UE-side training) which does not have such burden.

·       Model storage at the 3GPP network, compared to storing the model outside the 3GPP network, may come with a 3GPP network-side burden on model maintenance/storage.

·       Proprietary design disclosure concerns may arise from model training and/or model storage at the network side compared to other cases (such as Case y with UE-side training) which do not have such an issue.

 

R1-2312405         Summary#4 of General Aspects of AI/ML Framework              Moderator (Qualcomm)

From Friday session

Agreement

Capture the following into the conclusion section of the AI/ML TR.

The following aspects have been studied for the general framework of AI/ML over air interface for one-sided models and two-sided models.

·       Various Network-UE Collaboration Levels

·       Functionality-based LCM and model-ID-based LCM

·       Functionality/model selection, activation, deactivation, switching, fallback

·       Functionality identification and model identification

·       Data collection

·       Performance monitoring

·       Various model identification Types and their use cases

·       Reporting of applicable functionalities/models

·       Method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE

·       Model delivery/transfer and analysis of various model delivery/transfer Cases

The above studied aspects for General Framework can be considered for developing/specifying AI/ML use cases and common framework (if needed for some aspects) across AI/ML use cases.

 

 

Final summary in R1-2312407.

8.14.2    Other aspects on AI/ML for CSI feedback enhancement

Including potential specification impact. Consider RAN agreement from RAN#100 in RP-231481 (proposal 1).

 

R1-2310819         Discussion on remaining open issues for other aspects of AI/ML for CSI feedback enhancement        FUTUREWEI

R1-2310845         Remaining issues on AI/ML for CSI feedback enhancement              Huawei, HiSilicon

R1-2310914         Discussions on AI-CSI       Ericsson

R1-2310984         Remaining issues discussion on other aspects of AI/ML for CSI feedback enhancement      SEU

R1-2310988         Discussion on other aspects for AI CSI feedback enhancement              ZTE

R1-2311048         Views on specification impact for CSI feedback enhancement              Fujitsu

R1-2311115         Other aspects on AI/ML for CSI feedback enhancement              vivo

R1-2311149         Discussion on AI/ML for CSI feedback         Intel Corporation

R1-2311183         Discussion on other aspects on AIML for CSI feedback              Spreadtrum Communications

R1-2311270         On other aspects of AI/ML for CSI feedback enhancement              OPPO

R1-2311327         Other aspects for AI/ML CSI feedback enhancement  CATT

R1-2311391         Remaining issues on specification impact for CSI feedback based on AI/ML             xiaomi

R1-2311415         Other aspects on AI/ML for CSI feedback enhancement              NEC

R1-2311437         Remaining issues on AI/ML for CSI enhancement      LG Electronics

R1-2311446         Discussion on AI/ML for CSI feedback enhancement Panasonic

R1-2311500         Discussion on other aspects on AI/ML for CSI feedback enhancement       CMCC

R1-2311528         Remaining issues on CSI measurement enhancements via AI/ML              Sony

R1-2311530         AI and ML for CSI feedback enhancement   NVIDIA

R1-2311540         Remaining details on other aspects for CSI feedback enhancement              InterDigital, Inc.

R1-2311554         Discussion on AI/ML for CSI feedback enhancement China Telecom

R1-2311573         On Enhancement of AI/ML based CSI           Google

R1-2311640         Discussion on AI/ML for CSI feedback enhancement NTT DOCOMO, INC.

R1-2311705         Discussion on other aspects of CSI others     Apple

R1-2311784         Other aspects on AI/ML for CSI feedback enhancement              Nokia, Nokia Shanghai Bell

R1-2311865         Views on remaining aspects on AI/ML for CSI feedback enhancement       Samsung

R1-2311941         Further aspects of AI/ML for CSI feedback  Lenovo

R1-2311993         Other aspects on AI/ML  for CSI Feedback Enhancement              MediaTek Inc.

R1-2312057         Other aspects on AI/ML for CSI feedback enhancement              Qualcomm Incorporated

R1-2312088         Discussion on AI/ML for CSI feedback enhancement AT&T

R1-2312107         Discussions on Other Aspects on AI/ML for CSI Feedback Enhancement       Indian Institute of Tech (M), IIT Kanpur

R1-2312129         Other aspects on AI/ML for CSI feedback enhancement              ITL

R1-2312173         Varying CSI feedback granularity based on channel conditions              Rakuten Symphony

 

R1-2312333         Summary #1 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Tuesday session

Agreement

Adopt the following TP to TR 38.843:

-------------------------------------------- Start of Text Proposal --------------------------------------------------------      

5.1             CSI feedback enhancement

*** Unchanged text is omitted ***

In CSI compression using two-sided model use case, feasibility and procedure to align the information that enables the UE to select a CSI generation model(s) compatible with the CSI reconstruction model(s) used by the gNB is studied. 

*** Unchanged text is omitted ***

In CSI compression using two-sided model use case, at least the following options have been proposed by companies to define the pairing information used to enable the UE to select a CSI generation model(s) that is compatible with the CSI reconstruction model(s) used by the gNB:

-                 Option 1: The pairing information is in the form of the CSI reconstruction model ID that the NW will use.

-                 Option 2: The pairing information is in the form of the CSI generation model ID that the UE will use.

-                 Option 3: The pairing information is in the form of the paired CSI generation model and CSI reconstruction model ID.

-                 Option 4: The pairing information is in the form of the dataset ID used during type 3 sequential training.

-                 Option 5: The pairing information is in the form of a training session ID referring to a prior training session (e.g., via API) between NW and UE.

-                 Option 6: The pairing information is up to UE/NW offline co-engineering alignment, transparent to 3GPP specification.

-                 Note: the disclosure of the vendor information during the model pairing procedure and model identification procedure should be considered.

-                 Note: If each UE-side model is compatible with all NW-side models, the information is not needed by the UE.

-                 Note: Above does not imply there is a need for a central entity for defining/storing/maintaining the IDs. 

For CSI compression use case:

-           For model training, training data can be generated by UE/gNB

-           For NW-part of two-sided model inference, input data can be generated by UE and terminated at gNB.

-           For UE-part of two-sided model inference, input data is internally available at UE.

-           For performance monitoring at the NW side, calculated performance metrics (if needed) or data needed for performance metric calculation (if needed) can be generated by UE and terminated at gNB

For CSI prediction use cases:

-           For model training, training data can be generated by UE.

-           For UE-side model inference, input data is internally available at UE.

-           For performance monitoring at the NW side, calculated performance metrics (if needed) or data needed for performance metric calculation (if needed) can be generated by UE and terminated at gNB.

For CSI prediction using UE side model use case, at least the following aspects have been proposed by companies on performance monitoring for functionality-based LCM:

-           Type 1:

o    UE calculates the performance metric(s)

o    UE reports performance monitoring output that facilitates functionality fallback decision at the network

§   Performance monitoring output details can be further defined

§   NW may configure threshold criterion to facilitate UE side performance monitoring (if needed).

o    NW makes decision(s) of functionality fallback operation (fallback mechanism to legacy CSI reporting).

-           Type 2:

o    UE reports predicted CSI and/or the corresponding ground truth 

o    NW calculates the performance metrics.

o    NW makes decision(s) of functionality fallback operation (fallback mechanism to legacy CSI reporting).

-           Type 3:

o    UE calculates the performance metric(s)

o    UE reports the performance metric(s) to the NW

o    NW makes decision(s) of functionality fallback operation (fallback mechanism to legacy CSI reporting).

-           Functionality selection/activation/deactivation/switching as defined for other UE-side use cases can be reused, if applicable.

-           Configuration and procedure for performance monitoring

-           CSI-RS configuration for performance monitoring

-           Performance metric including at least intermediate KPI (e.g., NMSE or SGCS)

-           UE report, including periodic/semi-persistent/aperiodic reporting, and event driven report.

-           Note: down selection is not precluded.

-           Note: UE may make decision within the same functionality on model selection, activation, deactivation, switching operation transparent to the NW.

*** Unchanged text is omitted ***

7.2.2         CSI feedback enhancement

Items considered for studying the necessity, feasibility, and potential specification impact:

In CSI compression using two-sided model use case:

*** Unchanged text is omitted ***

Potential specification enhancement on:

-           CSI-RS configurations (not including CSI-RS pattern design enhancements)

-           CSI configuration

o    For network to indicate CSI reporting related information, e.g., gNB indication to the UE of one or more of following:

§   Information indicating CSI payload size

§   Information indicating quantization method/granularity

§   Rank restriction

§   Other payload related aspects

-           CSI reporting configurations

o    For UE determination/reporting of the actual CSI payload size, UE reports related information as configured by the NW

-           CSI report UCI mapping/priority/omission

-           CSI processing procedures

In CSI compression using two-sided model use case, feasibility and procedure to align the information that enables the UE to select a CSI generation model(s) compatible with the CSI reconstruction model(s) used by the gNB is studied. At least the following options have been proposed by companies to define the pairing information used to enable the UE to select a CSI generation model(s) that is compatible with the CSI reconstruction model(s) used by the gNB:

-           Option 1: The pairing information is in the form of the CSI reconstruction model ID that the NW will use.

-           Option 2: The pairing information is in the form of the CSI generation model ID that the UE will use.

-           Option 3: The pairing information is in the form of the paired CSI generation model and CSI reconstruction model ID.

-           Option 4: The pairing information is in the form of the dataset ID used during type 3 sequential training.

-           Option 5: The pairing information is in the form of a training session ID referring to a prior training session (e.g., via API) between NW and UE.

-           Option 6: The pairing information is up to UE/NW offline co-engineering alignment, transparent to 3GPP specification.

-           Note: the disclosure of the vendor information during the model pairing procedure and model identification procedure should be considered.

-           Note: If each UE-side model is compatible with all NW-side models, the information is not needed by the UE.

-           Note: Above does not imply there is a need for a central entity for defining/storing/maintaining the IDs. 

In CSI prediction using UE-sided model use case:

Data collection:

In CSI prediction using UE-side model use case, at least the following aspects have been proposed by companies on data collection:

-           Signalling and procedures for the data collection

o    Data collection indicated by NW

o    Requested from UE for data collection

-           CSI-RS configuration

-           Assistance information for categorizing the data, if needed

o    The provision of assistance information needs to consider feasibility of disclosing proprietary information to the other side.

For CSI prediction using UE side model use case, at least the following aspects have been proposed by companies on performance monitoring for functionality-based LCM:

-           Type 1:

o    UE calculates the performance metric(s)

o    UE reports performance monitoring output that facilitates functionality fallback decision at the network

§   Performance monitoring output details can be further defined

§   NW may configure threshold criterion to facilitate UE side performance monitoring (if needed).

o    NW makes decision(s) of functionality fallback operation (fallback mechanism to legacy CSI reporting).

-           Type 2:

o    UE reports predicted CSI and/or the corresponding ground truth 

o    NW calculates the performance metrics.

o    NW makes decision(s) of functionality fallback operation (fallback mechanism to legacy CSI reporting).

-           Type 3:

o    UE calculates the performance metric(s)

o    UE reports the performance metric(s) to the NW

o    NW makes decision(s) of functionality fallback operation (fallback mechanism to legacy CSI reporting).

-           Functionality selection/activation/deactivation/switching as defined for other UE-side use cases can be reused, if applicable.

-           Configuration and procedure for performance monitoring

-           CSI-RS configuration for performance monitoring

-           Performance metric including at least intermediate KPI (e.g., NMSE or SGCS)

-           UE report, including periodic/semi-persistent/aperiodic reporting, and event driven report.

-           Note: down selection is not precluded.

-           Note: UE may make decision within the same functionality on model selection, activation, deactivation, switching operation transparent to the NW.

-------------------------------------------- End of Text Proposal --------------------------------------------------------      
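The intermediate KPIs named in the performance-monitoring aspects above (NMSE and SGCS) are not given by formula in these notes. As a non-normative illustration only, a common way to compute them per layer for a rank-1 precoding vector is sketched below; the function names and the 32-port/noise-level choices are illustrative assumptions, not 3GPP-defined quantities.

```python
import cmath
import random

def nmse(v_true, v_hat):
    """Normalized MSE between target and reconstructed precoding vectors."""
    err = sum(abs(a - b) ** 2 for a, b in zip(v_true, v_hat))
    ref = sum(abs(a) ** 2 for a in v_true)
    return err / ref

def sgcs(v_true, v_hat):
    """Squared generalized cosine similarity; 1.0 means perfect reconstruction."""
    inner = abs(sum(a.conjugate() * b for a, b in zip(v_true, v_hat))) ** 2
    norms = sum(abs(a) ** 2 for a in v_true) * sum(abs(b) ** 2 for b in v_hat)
    return inner / norms

# Toy example: a 32-port eigenvector and a slightly perturbed reconstruction.
random.seed(0)
v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(32)]
v_rec = [a + complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for a in v]

# SGCS is invariant to a common phase rotation of the reconstruction:
v_rot = [a * cmath.exp(1j * 0.7) for a in v]
```

A UE-side monitor (Type 1/Type 3 above) would compute such a metric and either report it or compare it against an NW-configured threshold; whether and how this is done is a studied aspect, not an agreed mechanism.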

 

Agreement

Adopt the following TP to TR 38.843:

-------------------------------------------- Start of Text Proposal --------------------------------------------------------      

5.1             CSI feedback enhancement

*** Unchanged text is omitted ***

Considered AI/ML model training collaborations include:

-           Type 1: Joint training of the two-sided model at a single side/entity, e.g., UE-sided or Network-sided.

-           Type 2: Joint training of the two-sided model at network side and UE side, respectively.

-           Type 3: Separate training at network side and UE side, where the UE-side CSI generation part and the network-side CSI reconstruction part are trained by UE side and network side, respectively.

-           Note: Joint training means the generation model and reconstruction model should be trained in the same loop for forward propagation and backward propagation. Joint training could be done either at a single node or across multiple nodes (e.g., through gradient exchange between nodes).

-           Note: Separate training includes sequential training starting with UE side training, or sequential training starting with NW side training [, or parallel training] at UE and NW

-           Note: training collaboration Type 2 over the air interface for model training (not including model update) is concluded to be deprioritized in Rel-18 SI.

For Type 2 (Joint training of the two-sided model at network side and UE side, respectively), note that joint training includes both simultaneous training and sequential training, for which the pros and cons could be discussed separately. Further, note that Type 2 sequential training includes starting with UE side training or starting with NW side training.

-------------------------------------------- End of Text Proposal --------------------------------------------------------
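As a toy, non-normative sketch of Type 3 separate training in the NW-first order, assuming linear stand-ins for the actual (neural-network) models: the NW side trains and keeps its own model pair, delivers a dataset of (input sample, target latent) pairs, and the UE side independently fits its CSI generation part to reproduce the latents. All names and the 2-element toy dimensions are illustrative assumptions.

```python
import random

random.seed(1)

# --- NW side (Type 3, NW-first): the NW trains/owns its CSI generation part.
# Here it is just a fixed linear "encoder" w_nw; real models are neural networks.
w_nw = [0.8, -0.5]                       # hypothetical NW-side CSI generation part
def nw_encode(x):                        # latent CSI produced by the NW-side model
    return w_nw[0] * x[0] + w_nw[1] * x[1]

# --- Dataset delivery: NW shares (input sample, target latent) pairs with UE side.
samples = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
dataset = [(x, nw_encode(x)) for x in samples]

# --- UE side: separately fits its own CSI generation model to match the latents
# (ordinary least squares via normal equations, 2x2 closed form).
a11 = sum(x[0] * x[0] for x, _ in dataset)
a12 = sum(x[0] * x[1] for x, _ in dataset)
a22 = sum(x[1] * x[1] for x, _ in dataset)
b1 = sum(x[0] * z for x, z in dataset)
b2 = sum(x[1] * z for x, z in dataset)
det = a11 * a22 - a12 * a12
w_ue = [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

def ue_encode(x):                        # UE-side CSI generation part after training
    return w_ue[0] * x[0] + w_ue[1] * x[1]
```

The UE-first order is symmetric (dataset delivery from UE side to NW side for CSI reconstruction model training); in practice the delivered quantities, their format, and their quantization are exactly the open dataset-delivery aspects studied in this agenda item.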

 

Agreement

In CSI compression using two-sided model use case, the following table captures the pros/cons of training collaboration Type 2 and Type 3:

| Characteristics \ Training types | Type 2: Simultaneous | Type 2: Sequential (NW first) | Type 3: NW first | Type 3: UE first |
| --- | --- | --- | --- | --- |
| Feasibility of allowing UE side and NW side to develop/update models separately | Infeasible | No consensus | Feasible | Feasible |
| Extendibility: to train new UE-side model compatible with NW-side model in use | Not support | Support | Support | No consensus |
| Extendibility: to train new NW-side model compatible with UE-side model in use | Not support | Not support | No consensus | Support |

 

In CSI compression using two-sided model use case, the following table captures the pros/cons of training collaboration Type 1:

| Characteristics \ Training types | Type 1: NW side (unknown model structure at UE) | Type 1: NW side (known model structure at UE) | Type 1: UE side (unknown model structure at NW) | Type 1: UE side (known model structure at NW) |
| --- | --- | --- | --- | --- |
| Feasibility of allowing UE side and NW side to develop/update models separately | gNB: Feasible. UE: Not feasible due to type 1 definition | gNB: Feasible with restriction for CSI reconstruction model. UE: Not feasible due to type 1 definition | gNB: Not feasible due to type 1 definition. UE: Feasible | gNB: Not feasible due to type 1 definition. UE: Feasible with restriction for CSI generation model |
| Extendibility: to train new UE-side model compatible with NW-side model in use (note x2) | Yes | Yes | No consensus | No consensus |
| Extendibility: to train new NW-side model compatible with UE-side model in use (note x2) | No consensus | No consensus | Yes | Yes |

Note 4: Flexibility after deployment is evaluated by the amount of offline cross-vendor co-engineering effort. "Flexible" indicates minimal additional co-engineering between vendors; "semi-flexible" indicates additional co-engineering effort between vendors.

Note x2: the performance of the new model is similar to the performance of sequential training when training type 1 supports freezing a part of the two-sided model.

 

Agreement

In CSI compression using two-sided model use case, in order to select a CSI generation model compatible with the CSI reconstruction model used by the gNB, the following aspects have been proposed:

·       Pairing information can be established based on model identification.

 

R1-2312334         Summary #2 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Wednesday session

Agreement (further revised as shown in Thursday session)

Capture the following summary in Section 8 of the 3GPP TR 38.843 on AI/ML based CSI compression sub-use case.

 

-------------------------------------------- Start of Text Proposal --------------------------------------------------------

The performance benefit and potential specification impact were studied for AI/ML based CSI compression sub use case.

·        Evaluation has been performed to assess AI/ML based CSI compression from various aspects, including performance gain over non-AI/ML benchmark, model input/output type, CSI feedback quantization methods, ground-truth CSI format, monitoring, generalization, training collaboration types, etc. Some aspects are studied but not fully investigated, including the options of CQI/RI calculation, the options of rank>1 solution.

·        Performance gain over baseline [and computation complexity in FLOPs] are summarized in clause 6.2.2.8 of TR 38.843.

·        Potential specification impact on NW side/UE side data collection, dataset delivery, quantization alignment between CSI generation part at the UE and CSI reconstruction part at the NW, CSI report configuration, CSI report format, pairing information/procedure and monitoring approach were investigated but not all aspects were identified.

·        The pros and cons are analysed for each training collaboration type, and each training collaboration type has its own benefits and limitations in different aspects. The study has investigated the feasibility of the studied training collaboration types and the necessity of corresponding potential RAN1 specification impact. However, not all aspects have been concluded.

 -------------------------------------------- End of Text Proposal --------------------------------------------------------

 

 

R1-2312335         Summary #3 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Thursday session

Agreement

Capture the following as a conclusion in section 8 of the TR.

·       From RAN1 perspective, there is no consensus on the recommendation of CSI compression for normative work.

·       At least the following aspects are the reasons for the lack of RAN1 consensus on the recommendation of CSI compression for normative work.

o   Trade-off between performance and complexity/overhead

o   Issues related to inter-vendor training collaboration

·       Other aspects that require further study/conclusion are captured in the summary

Agreement

Capture the following summary in Section 8 of the 3GPP TR 38.843 on AI/ML based CSI prediction sub-use case.

-------------------------------------------- Start of Text Proposal ---------------------------------------------

The performance and potential specification impact were studied for AI/ML based UE side CSI prediction sub use case.

·       Performance compared with baseline is summarized in clause 6.2.2.8 of TR 38.843.

·       Potential specification impact on data collection and performance monitoring are discussed in section 7.2.2 of TR 38.843.

-------------------------------------------- End of Text Proposal ----------------------------------------------

 

 

R1-2312336         Summary #4 on other aspects of AI/ML for CSI enhancement              Moderator (Apple)

From Friday session

Agreement

Capture the following text for CSI prediction summary agreed in RAN1 115, for section 8 of TR38.843.

-------------------------------------------- Start of Text Proposal ---------------------------------------------

The performance and potential specification impact were studied for AI/ML based UE side CSI prediction sub use case.

·        Evaluation has been performed to assess AI/ML based CSI prediction from various aspects, including performance compared to baseline, model input/output type, generalization over UE speed, etc. Some aspects are studied but lack observations, including scalability over various configurations, generalization over other scenarios, and the approach of fine-tuning. Performance monitoring accuracy is not evaluated.

·        Performance compared with baseline is summarized in clause 6.2.2.8 of TR 38.843.

·        Potential specification impact on data collection and performance monitoring are discussed in section 7.2.2 of TR 38.843.

o    Limited specification aspects were considered.

-------------------------------------------- End of Text Proposal ----------------------------------------------

 

Agreement

Capture the following conclusion in section 8 of the TR 38.843

 

Agreement

Adopt the following TP to TR 38.843:

-------------------------------------------- Start of Text Proposal ---------------------------------------------

5.1             CSI feedback enhancement

*** Unchanged text is omitted ***

In CSI compression using two-sided model use case with training collaboration Type 3, for sequential training, at least the following aspects have been identified for dataset delivery from RAN1 perspective, including:  

-        Dataset and/or other information delivery from UE side to NW side, which can be used at least for CSI reconstruction model training

-        Dataset and/or other information delivery from NW side to UE side, which can be used at least for CSI generation model training

-        Potential dataset delivery methods including offline delivery, and over the air delivery

-        Data sample format/type

-        Quantization/de-quantization related information

*** Unchanged text is omitted ***

7.2.2         CSI feedback enhancement

Items considered for studying the necessity, feasibility, and potential specification impact:

In CSI compression using two-sided model use case:

*** Unchanged text is omitted ***

NW side data collection:

·        Enhancement of SRS and/or CSI-RS measurement and/or CSI reporting to enable higher accuracy measurement.

·        Contents of the ground-truth CSI including:  

o    Data sample type, e.g., precoding matrix, channel matrix etc.

o    Data sample format: scalar quantization and/or codebook-based quantization (e.g., eType II-like).

o    Assistance information (e.g., time stamps, and/or cell ID, Assistance information for Network data collection for categorizing the data in forms of ID for the purpose of differentiating characteristics of data due to specific configuration, scenarios, site etc., and data quality indicator)

·        Latency requirement for data collection

·        Signaling for triggering the data collection

·        Ground-truth CSI report for NW side data collection for model performance monitoring, including:

o    Scalar quantization for ground-truth CSI

o    Codebook-based quantization for ground-truth CSI

o    RRC signalling and/or L1 signalling procedure to enable fast identification of AI/ML model performance

o    Aperiodic/semi-persistent or periodic ground-truth CSI report

·        Ground-truth CSI format for model training, including scalar or codebook-based quantization for ground-truth CSI. The number of layers for which the ground truth data is collected, and whether UE or NW determine the number of layers for ground-truth CSI data collection, are considered.

In CSI compression using two-sided model use case with training collaboration Type 3, for sequential training, at least the following aspects have been identified for dataset delivery from RAN1 perspective, including:

·        Dataset and/or other information delivery from UE side to NW side, which can be used at least for CSI reconstruction model training

·        Dataset and/or other information delivery from NW side to UE side, which can be used at least for CSI generation model training

·        Potential dataset delivery methods including offline delivery, and over the air delivery

·        Data sample format/type

·        Quantization/de-quantization related information

-------------------------------------------- End of Text Proposal ----------------------------------------------
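Purely to make the dataset-delivery aspects above concrete, the listed items (delivery method, data sample format/type, quantization/de-quantization information) could be pictured as a per-dataset metadata record. This is an invented illustration; none of these field names are 3GPP-defined information elements.

```python
# Hypothetical metadata accompanying a Type 3 sequential-training dataset.
# Every field name below is an illustrative assumption, not an agreed IE.
dataset_meta = {
    "dataset_id": 17,                          # could double as pairing info (cf. Option 4)
    "delivery": "offline",                     # vs. "over-the-air"
    "direction": "NW-side to UE-side",         # used for CSI generation model training
    "sample_type": "precoding_matrix",         # data sample type
    "sample_format": {                         # data sample format
        "quantization": "scalar",
        "bits_per_value": 8,
    },
    "dequantization": {"value_range": [-4.0, 4.0]},  # info needed to de-quantize
}

def can_dequantize(meta):
    """A consumer would check that de-quantization info is present before training."""
    return "dequantization" in meta and "bits_per_value" in meta["sample_format"]
```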

 

Agreement

Adopt the following TP to TR 38.843:

-------------------------------------------- Start of Text Proposal ---------------------------------------------

Table 5.1-1: Pros and Cons of training collaboration Type 1

| Characteristics \ Training Types | Type 1: NW side (unknown model structure at UE) | Type 1: NW side (known model structure at UE) | Type 1: UE side (unknown model structure at NW) | Type 1: UE side (known model structure at NW) |
| --- | --- | --- | --- | --- |
| *** Unchanged text is omitted *** | | | | |
| Model performance based on evaluation | Performance refers to clause 6.2.2 | Performance refers to clause 6.2.2 | Performance refers to clause 6.2.2 | Performance refers to clause 6.2.2 |

 

Table 5.1-2: Pros and Cons of training collaboration Type 2 and Type 3

| Characteristics \ Training Types | Type 2: Simultaneous | Type 2: Sequential (NW first) | Type 3: NW first | Type 3: UE first |
| --- | --- | --- | --- | --- |
| *** Unchanged text is omitted *** | | | | |
| Model performance based on evaluation | Performance refers to clause 6.2.2 | Performance refers to clause 6.2.2 | Performance refers to clause 6.2.2 | Performance refers to clause 6.2.2 |

-------------------------------------------- End of Text Proposal ----------------------------------------------

 

 

Final summary in R1-2312337.

8.14.3    Remaining aspects on AI/ML

To be used for finalization of TR conclusions and/or recommendations on ‘Evaluation on AI/ML for CSI feedback enhancement’, ‘Evaluation on AI/ML for beam management’, ‘Other aspects on AI/ML for beam management’, ‘Evaluation on AI/ML for positioning accuracy enhancement’, and ‘Other aspects on AI/ML for positioning accuracy enhancement’. Contributions are to be submitted only by FLs.

 

R1-2310846         Highlights for the evaluation on AI/ML based CSI feedback enhancement       Huawei, HiSilicon

R1-2310906         Remaining Aspects of AI/ML for Positioning Accuracy Enhancement       Ericsson

R1-2311194         Remaining open aspects of AI/ML positioning           Moderator (vivo)

R1-2311271         Other aspects on AI/ML for beam management          OPPO

R1-2311866         Remaining aspects for evaluation of AI/ML for beam management        Moderator (Samsung)

 

 

R1-2312383         Summary#1 for other aspects on AI/ML for beam management       Moderator (OPPO)

From Wednesday session

Agreement

Confirm the following working assumption (with modification in red) agreed in RAN1#114bis

Working Assumption

For AI-based beam management, from RAN1 perspective, at least the following are recommended for normative work

·       Both BM-Case1 and BM-Case2

o    BM-Case1: Spatial-domain DL Tx beam prediction for Set A of beams based on measurement results of Set B of beams

o    BM-Case2: Temporal DL Tx beam prediction for Set A of beams based on the historic measurement results of Set B of beams

·       DL Tx beam prediction for both UE-sided model and NW-sided model

·       Necessary signaling/mechanism(s) to facilitate data collection, model inference, and performance monitoring for both UE-sided model and NW-sided model

·       Signaling/mechanism(s) to facilitate necessary LCM operations via 3GPP signaling for UE-sided model

 

Agreement

Capture the following TP in Section 8 of the 3GPP TR 38.843 for the conclusion on AI/ML-based beam management:

-------------------------------------------- Start of Text Proposal ----------------------------------------------------------------

This study focuses on evaluation of potential benefits of AI/ML-based beam management and analysis of potential enhancements to enable AI/ML for beam management.

During the study, BM-Case1 (Spatial-domain downlink beam prediction) and BM-Case2 (Temporal downlink beam prediction), as described in Section 5.2, are selected as the representative sub use cases.

Evaluation scenarios and KPIs are described in Section 6.3.1, and the detailed evaluation results from different sources and the key observations are captured in Section 6.3.2.  Evaluation results have shown that it is beneficial to enable AI/ML for beam management in the considered evaluation scenarios.

The necessity, feasibility, benefit and potential specification impacts of potential enhancements to enable AI/ML for beam management were studied from different aspects, and the outputs are captured in Section 7.

-------------------------------------------- End of Text Proposal -----------------------------------------------------------------

 

 

Final summary in R1-2312384.

 

===========================================================================================

R1-2312445         FL summary #1 for remaining aspects for evaluation of AI/ML for beam management      Moderator (Samsung)

From Wednesday session

Agreement

Adopt the update of the following text proposal for TR 38.843:

==== Start of text proposal for TR 38.843 =======

6.3.2         Performance results

BM_Table 1 through BM_Table 5 in attached Spreadsheets for Beam Management evaluations present the performance results for:

-        BM_Table 1: Evaluation results for BMCase-1 without generalization

-        BM_Table 2: Evaluation results for BMCase-2 without generalization

-        BM_Table 3: Evaluation results for BMCase-1 with generalization for DL Tx beam prediction

-        BM_Table 4. Evaluation results for BMCase-1 with generalization for beam pair prediction

-        BM_Table 5. Evaluation results for BMCase-2 with generalization for DL Tx beam and beam pair prediction

In the evaluation, SLS is used for data generation for training/inference unless otherwise stated.

< Unchanged parts are omitted >

====== end of text proposal for TR 38.843 ======

 

Agreement

Adopt the update of BM_Table 1 and BM_Table 2 in BM_Evaluations_spreadsheets attached to TR 38.843 as in the attachments of R1-2312445.

 

 

R1-2312560         FL summary #2 for remaining aspects for evaluation of AI/ML for beam management      Moderator (Samsung)

From Friday session

Agreement

Adopt the update of the following text proposal for TR 38.843:

==== Start of text proposal for TR 38.843 =======

6.3.1         Evaluation assumptions, methodology and KPIs

Figure 6.3.1-1 provides an example of the inference procedure for beam management for BM-Case1 and BM-Case2. Measurements based on Set B of beams are used as model input. In addition, beam ID information may also be provided as input to the AI/ML model. Based on the model output (e.g., probability of each beam in Set A being the Top-1 beam, predicted L1-RSRPs), Top-1/N beam(s) among Set A of beams can be predicted, potentially with predicted L1-RSRPs (depending on the labelling). In the evaluation, for BM-Case1, the measurements of Set B (unless otherwise stated) are used as model input to predict Top-1/N beams from Set A, and for BM-Case2, the measurements from historic time instance(s) are used as model input for temporal DL beam prediction of beams from Set A. For both BM-Case1 and BM-Case2, the cases that Set A and Set B are different (Set B is NOT a subset of Set A) and that Set B is a subset of Set A are considered; for BM-Case2, the case that Set A and Set B are the same is also considered. The performance of both DL Tx beam prediction and DL Tx-Rx beam pair prediction is evaluated.

For both BM-Case1 and BM-Case2, UE can report the prediction result to NW based on the output of a UE-side model, or NW can predict the Top-1/N beam(s) based on the reported measurements of Set B for a NW-side model.


Figure 6.3.1-1 An example of the inference procedure for beam management.

< Unchanged parts are omitted >

====== end of text proposal for TR 38.843 ======
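The Set B → Set A inference step described in the text proposal above can be sketched as follows; the "model" here is a random placeholder (weights, set sizes, and RSRP values are all illustrative, not from any evaluated model), purely to show the Top-N selection from per-beam probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SET_A = 32   # beams in Set A (prediction targets)
N_SET_B = 8    # measured beams in Set B (model input)

# Placeholder for a trained UE-side model: maps Set B L1-RSRP
# measurements to a probability per Set A beam (random weights here).
W = rng.standard_normal((N_SET_A, N_SET_B))

def model_infer(rsrp_set_b):
    logits = W @ rsrp_set_b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()           # softmax over Set A beams

rsrp_b = rng.uniform(-100.0, -60.0, N_SET_B)   # measured L1-RSRP in dBm
probs = model_infer(rsrp_b)

top_n = 4
top_beams = np.argsort(probs)[::-1][:top_n]    # Top-N Set A beam indices
print("predicted Top-%d beams in Set A:" % top_n, top_beams.tolist())
```

Depending on the labelling, the same output vector could instead hold predicted L1-RSRPs, with Top-N selection applied directly to those values.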

 

 

===========================================================================================

R1-2312399         Summary#1 for CSI evaluation of [115-R18-AI/ML] Moderator (Huawei)

R1-2312400         Summary#2 for CSI evaluation of [115-R18-AI/ML]              Moderator (Huawei)

From Wednesday session

Agreement

Capture the following high-level observations for CSI compression to section 6.2.2.8 of TR 38.843

·       From the perspective of intermediate KPI based monitoring,

o   For the monitoring at NW side, increased monitoring accuracy can be achieved by considering R16 eType II CB with new/larger parameter(s) as the ground-truth CSI format for monitoring. On the other hand, the new/larger parameter(s) would lead to increased air-interface overhead compared to R16 eType II CB with legacy parameters.

o   For the monitoring at UE side, performance can be monitored with smaller air-interface overhead by considering proxy model at UE compared with monitoring at NW side. On the other hand, the monitoring accuracy may be impacted by the design/robustness of the proxy model.

o   Note: the complexity aspect for Case 1, Case 2-1 and Case 2-2 is not evaluated.

·       From the perspective of high resolution ground-truth CSI for training, compared to unquantized ground-truth CSI (e.g., Float32), taking R16 eType II CB with new/larger parameter(s) as the ground-truth CSI format for training data collection can achieve significant overhead reduction without causing severe performance degradation; taking scalar quantization format for training data collection can achieve moderate overhead reduction without causing severe performance degradation. On the other hand, the R16 eType II CB with new/larger parameter(s) would lead to increased overhead compared to R16 eType II CB with legacy parameters

o   For the ground-truth CSI format, 5 sources observe that R16 eType II CB with new/larger parameter(s) outperforms R16 eType II CB with legacy parameters, while one source observes that R16 eType II CB with legacy parameters is already close to Float32 with a particular dataset processing technique.

o   Note: the complexity aspect is not evaluated.

Agreement

Capture the following high-level observations for CSI compression to section 6.2.2.8 of TR 38.843

·       From the perspective of model input/output type, it is more beneficial to consider the precoding matrix as the model input (for the CSI generation part)/output (for the CSI reconstruction part) than the explicit channel matrix.

·       From the perspective of quantization methods for CSI feedback,

o   For the quantization awareness in training, it is beneficial to consider quantization-aware training with fixed/pre-configured quantization method/parameters (Case 2-1) or jointly updated quantization method/parameters (Case 2-2) to avoid severe performance degradation. In particular, Case 2-2 is more beneficial in performance than Case 2-1 under the vector quantization (VQ) format.

o   For the quantization format, VQ achieves comparable performance with the scalar quantization (SQ) format in general, where VQ achieves better performance than SQ in some cases and worse in others.

·       From the perspective of generalization over scenarios, or scalability over configurations that have been evaluated, compared to generalization Case 1 where the AI/ML model is trained with dataset subject to a certain scenario#B/configuration#B and applied for inference with a same scenario#B/configuration#B,

o   For generalization Case 2 where the AI/ML model is trained with dataset from a different scenario#A/configuration#A, generalized performance may be achieved for some certain combinations of scenario#A/configuration#A and scenario#B/configuration#B but not for others.

o   For generalization Case 3 where the training dataset is constructed with data samples subject to more than one scenario/configuration (evaluations studied up to four scenarios/configurations) including scenario#B/configuration#B, generalized performance of the AI/ML model can be achieved.

o   In particular, appropriate scalability solution (e.g., truncation/padding, adaptive quantization granularities, adaptation layer in the AI/ML model) may need to be performed to scale the dimensions of the AI/ML model when the training dataset includes data samples subject to configuration#A which has different input/output dimension than configuration#B.

·       From the perspective of training collaboration types, compared to 1-on-1 joint training, both multi-vendor joint training and separate training with procedures given in Section 6.2.1 may suffer performance loss.

o   In particular, for multi-vendor joint training, minor or moderate degradation is observed.

o   In particular, for separate training with the procedure given in Section 6.2.1, the performance loss depends on factors such as backbone alignment and multi-vendor training behavior:

§  For separate training of 1 NW part model and 1 UE part model, under both NW first training and UE first training, if backbones are aligned between the two sides, minor degradation is observed; otherwise, additional degradation is observed, leading to minor or moderate performance degradation.

§  For NW first training with 1 UE part model to N>1 NW part models, or UE first training with 1 NW part model to M>1 UE part models, additional degradation is observed, leading to minor, moderate, or significant performance degradation, depending on the training approach.

§  As a note, other procedures of separate training are not extensively evaluated.
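As a toy illustration of the scalar (SQ) vs vector (VQ) quantization formats compared in the observations above (codebook, bit-width, and latent dimension are arbitrary choices, not taken from any evaluated model):

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.standard_normal(8)      # toy CSI-generation-model output

# Scalar quantization (SQ): each element quantized independently on a
# uniform 2-bit grid over [-2, 2] (hypothetical parameters).
def sq(x, bits=2, lo=-2.0, hi=2.0):
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    idx = np.clip(np.round((x - lo) / step), 0, levels - 1)
    return lo + idx * step

# Vector quantization (VQ): the whole vector is mapped to the nearest
# codeword of a codebook shared by both sides (random codebook here).
def vq(x, codebook):
    d = np.linalg.norm(codebook - x, axis=1)
    return codebook[np.argmin(d)]

codebook = rng.standard_normal((16, 8))          # 16 codewords, dim 8
print("SQ reconstruction error:", float(np.linalg.norm(latent - sq(latent))))
print("VQ reconstruction error:", float(np.linalg.norm(latent - vq(latent, codebook))))
```

In quantization-aware training Case 2-2, the VQ codebook itself would be updated jointly with the model, rather than fixed as in this sketch.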

Agreement

Capture the following high-level observation for CSI prediction to section 6.2.2.8 of TR 38.843

·       From the perspective of basic performance gain over non-AI/ML benchmark, under the same UE speed for training and inference,

o   AI/ML based CSI prediction outperforms the benchmark of the nearest historical CSI in general, where the majority of sources observe up to 10.6% gain in terms of mean UPT.

o   For AI/ML based CSI prediction over non-AI/ML based CSI prediction, 3 sources observe 0.7%~7% gain, while 2 sources observe a performance loss of -0.1%~-17% in terms of mean UPT.

Agreement

Capture the following high-level observations for CSI compression to section 6.2.2.8 of TR 38.843

·       From the perspective of basic performance gain over non-AI/ML benchmark, AI/ML based CSI compression outperforms Rel-16 eType II CB in general under 1-on-1 joint training and generalization Case 1, where

o   0.2%~2%/-0.3%~6%/-4%~7.4% gains of mean UPT as shown in Figure X1~Figure X3 are observed for Max rank 1/2/4, respectively, under RU≤39%.

o   0.1%~4%/-0.5%~10%/-1.8%~12.22% gains of mean UPT as shown in Figure Y1~Figure Y3 are observed for Max rank 1/2/4, respectively, under RU40%-69%.

o   0.23%~9%/-0.2%~15%/-1%~17% gains of mean UPT as shown in Figure Z1~Figure Z3 are observed for Max rank 1/2/4, respectively, under RU>70%.


Figure X1 Mean UPT gain, Max Rank 1 (RU≤39%), x-axis means index of source

 


Figure X2 Mean UPT gain, Max Rank 2 (RU≤39%), x-axis means index of source

 

Figure X3 Mean UPT gain, Max Rank 4 (RU≤39%), x-axis means index of source

 


Figure Y1 Mean UPT gain, Max Rank 1 (RU40%-69%), x-axis means index of source

 


Figure Y2 Mean UPT gain, Max Rank 2 (RU40%-69%), x-axis means index of source

 


Figure Y3 Mean UPT gain, Max Rank 4 (RU40%-69%), x-axis means index of source

 


Figure Z1 Mean UPT gain, Max Rank 1 (RU>70%), x-axis means index of source


Figure Z2 Mean UPT gain, Max Rank 2 (RU>70%), x-axis means index of source


Figure Z3 Mean UPT gain, Max Rank 4 (RU>70%), x-axis means index of source

 

 

R1-2312401         Summary#3 for CSI evaluation of [115-R18-AI/ML]              Moderator (Huawei)

From Thursday session

Agreement:

Capture the following high-level observations for CSI compression to section 6.2.2.8 of TR 38.843

·       From the perspective of CSI overhead reduction over non-AI/ML, AI/ML based CSI compression achieves CSI feedback reduction compared with Rel-16 eType II CB in general under 1-on-1 joint training and generalization Case 1, where 4 sources observe the CSI feedback overhead reduction of 10.24%~60%/10%~58.33%/8%~79% for Max rank 1/2/4, respectively, under FTP traffic.

Agreement:

Capture the following high-level observations for CSI compression to section 6.2.2.8 of TR 38.843

·       From the perspective of AI/ML complexity, a majority of 25 sources adopt the CSI generation model subject to the FLOPs from 10M to 800M, and 26 sources adopt the CSI reconstruction model subject to the FLOPs from 10M to 1100M; on the other hand, the actual model complexity may differ from the model complexity in the evaluation with respect to platform-dependent optimization on model implementations. In addition, the complexity between AI/ML and non-AI/ML benchmark is not compared.

 

Final summary in R1-2312603.

R1-2312604         TP on CSI evaluation for TR38.843            Moderator (Huawei)

From Friday session

Agreement

The TP in section 2 of R1-2312604 is endorsed for the TR on AI/ML with the following revised text:

·       From the perspective of AI/ML complexity, a majority of 25 sources adopt the CSI generation model subject to the computational complexity in units of FLOPs from 10M to 800M, and 26 sources adopt the CSI reconstruction model subject to the FLOPs from 10M to 1100M. The actual model complexity may differ from the model complexity in the evaluation with respect to platform-dependent optimization on model implementations. In addition, the complexity between AI/ML and non-AI/ML benchmark is not compared.

·       From the perspective of AI/ML complexity, a majority of sources adopt the model subject to the computational complexity in units of FLOPs from 0.1M to 1000M. The actual model complexity may differ from the model complexity in the evaluation with respect to platform-dependent optimization on model implementations. In addition, the complexity between AI/ML and non-AI/ML benchmark is not compared.

 

===========================================================================================

R1-2312413         Summary #1 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement Moderator (Ericsson)

R1-2312414         Summary #2 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement Moderator (Ericsson)

R1-2312415         Summary #3 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement           Moderator (Ericsson)

From Wednesday session

Agreement

Adopt the proposal below to correct the placement of the complexity observation.

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4.2         Performance results

<Unchanged text is omitted>

Model monitoring

For AI/ML assisted positioning, evaluation results have been provided by sources for label-based model monitoring methods. With TOA and/or LOS/NLOS indicator as model output, the estimated ground truth label (i.e., TOA and/or LOS/NLOS indicator) is provided by the location estimation from the associated conventional positioning method. The associated conventional positioning method refers to the method which utilizes the AI/ML model output to determine target UE location.

For both direct AI/ML and AI/ML assisted positioning, evaluation results have been provided by sources to demonstrate the feasibility of label-free model monitoring methods.

Model complexity and computational complexity

For AI/ML based positioning method, companies have submitted evaluation results to show that for their evaluated cases, for a given company’s model design, a lower complexity (model complexity and computational complexity) model can still achieve acceptable positioning accuracy (e.g., <1m), albeit degraded, when compared to a higher complexity model.

 

6.4.2.2      Generalization Aspects

Observations:

Direct AI/ML positioning

...


<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================
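The positioning accuracy figures quoted in these observations (e.g., "<1m", and the horizontal positioning error at CDF=90% used elsewhere in clause 6.4.2) are percentiles of the per-sample horizontal error. A minimal numpy sketch of the metric, on toy data that is not from any evaluation source:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ground-truth and estimated 2-D UE positions in meters.
true_xy = rng.uniform(0.0, 120.0, (5000, 2))
est_xy = true_xy + rng.standard_normal((5000, 2)) * 0.6

# Horizontal positioning error per sample, then the CDF = 90% point,
# i.e. the 90th percentile of the error distribution.
err = np.hypot(est_xy[:, 0] - true_xy[:, 0], est_xy[:, 1] - true_xy[:, 1])
err_at_90 = float(np.percentile(err, 90))
print("horizontal positioning error at CDF=90%%: %.2f m" % err_at_90)
```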

 

Agreement

Adopt the text proposal to TR 38.843 to better group the results for “Direct AI/ML positioning”, for “AI/ML assisted positioning”, and for both.

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4.2         Performance results

<Unchanged text is omitted>


 

6.4.2.1      Training Data Collection

Observations:

Direct AI/ML positioning

...

6.4.2.2      Generalization Aspects

Observations:

Direct AI/ML positioning

...

AI/ML assisted positioning

...

Both direct AI/ML positioning and AI/ML assisted positioning

For both direct AI/ML and AI/ML assisted positioning, evaluation results submitted show that with CIR model input for a trained model,

-      For two SNR/SINR values S1 (dB) and S2 (dB), S1 ≥ S2 + 15 dB, positioning error of a model trained with data of S1 (dB) and tested with data of S2 (dB) is more than 5.75 times that of the model trained and tested with data of S1 (dB).

-      For two SNR/SINR values S1 (dB) and S2 (dB), S1 ≤ S2 – 10 dB, the generalization performance of a model trained with data of S1 (dB) and tested with data of S2 (dB) is better than the performance of a model trained with data of S2 (dB) and tested with data of S1 (dB). Positioning error of a model trained with data of S2 (dB) and tested with data of S1 (dB) is more than 2.97 times that of the model trained with data of S1 (dB) and tested with data of S2 (dB).

Note: here the positioning error is the horizontal positioning error (meters) at CDF=90%.

 

6.4.2.3      Fine-tuning

...

6.4.2.4      Model-input Size Reduction

Observations:

Direct AI/ML positioning

...

AI/ML assisted positioning

For AI/ML assisted positioning, the positioning accuracy at model inference is affected by the type of model input. Evaluation results show that, when the model input type is changed while holding other parameters (e.g., Nt, N't, Nport, N'TRP) the same,

·        The positioning error of PDP as model input is 1.17 ~ 1.63 times the positioning error of CIR as model input.

·        The positioning error of DP as model input is 1.33 ~ 2.01 times the positioning error of CIR as model input.

For AI/ML assisted positioning, with Nt consecutive time domain samples used as model input, evaluation results show that when CIR or PDP are used as model input, using different Nt while holding other parameters the same, 

...

Both direct AI/ML positioning and AI/ML assisted positioning

Evaluation of TRP reduction for both direct AI/ML positioning and AI/ML assisted positioning shows that identification of the active TRPs is beneficial for Approach 2-B; otherwise, the model suffers from poor performance in terms of positioning accuracy.

For example, evaluation results from 4 sources show that the horizontal positioning accuracy is greater than 10 m if TRP identification is not included as model input.

6.4.2.5      Non-ideal label(s)

Observations:

Direct AI/ML positioning

Evaluation shows that direct AI/ML positioning is robust to certain label error based on evaluation results of L in the range of (0, 5) meter. The exact range of label error that can be tolerated depends on the positioning accuracy requirement, where tighter positioning accuracy requirement demands smaller label error.


...

AI/ML assisted positioning

...

Other

For AI/ML based positioning, evaluation results show that semi-supervised learning is helpful for improving the positioning accuracy when the same amount of ideal labelled data is used for supervised learning, and the number of ideal labelled data is limited.

Regarding ground truth label generation for AI/ML based positioning, multiple sources submitted evaluation results on the impact of ground truth label for training obtained by existing NR RAT-dependent positioning methods. Feasibility and performance benefit of utilizing ground truth label for training estimated by existing NR RAT-dependent positioning methods are observed.

-        Source 1 evaluated in InF-DH {40%, 2, 2} and showed that the AI/ML model can be trained with noisy labels along with the corresponding quality estimated by the legacy positioning methods, improving positioning performance from 3.73m @90% (5k ideal label) to 1.72m @90% (5k ideal label + 20k noisy label). It also showed a performance benefit compared to semi-supervised training, which achieved 2.78m @90% (5k ideal label + 20k unlabeled data). Note that training data weighting is used with a label quality indicator.

-        Source 2 evaluated in InF-DH {60%, 6, 2} and showed that the performance of direct AI/ML positioning with 1k clean labelled samples improves from 13.76m to 8.72m when 350 additional samples labelled using the NR-RAT positioning method are considered. Note that the label error is up to 3.5m.

-        Source 3 evaluated in both InF-DH {60%, 6, 2} and InF-DH {40%, 2, 2} and showed performance loss when compared to the all-ideal-label case. For example, it showed in InF-DH {40%, 2, 2} that the accuracy degrades from 0.39m @90% (100% ideal label) to 2.10m @90% (50% ideal label and 50% label obtained by the existing DL-TDOA scheme). Note that the noisy label is treated the same as the ideal label in training.

 

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================
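Source 1's "training data weighting ... with label quality indicator" mentioned in clause 6.4.2.5 above can be illustrated with a toy weighted loss; the weighting rule and all data below are hypothetical stand-ins, not Source 1's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: true 2-D UE positions and noisy training labels (meters).
n = 1000
true_xy = rng.uniform(0.0, 100.0, (n, 2))
label_noise_std = rng.uniform(0.0, 3.0, n)               # per-sample label noise
labels = true_xy + rng.standard_normal((n, 2)) * label_noise_std[:, None]

# Hypothetical label quality indicator: down-weight noisier labels.
weights = 1.0 / (1.0 + label_noise_std ** 2)

def weighted_mse(pred, labels, w):
    # Per-sample squared position error, weighted by label quality.
    err = np.sum((pred - labels) ** 2, axis=1)
    return float(np.sum(w * err) / np.sum(w))

pred = labels + rng.standard_normal((n, 2)) * 0.5        # stand-in model output
print("label-quality-weighted MSE:", weighted_mse(pred, labels, weights))
```

With uniform weights this reduces to the plain mean squared error; non-uniform weights let samples labelled by a legacy positioning method contribute according to their estimated quality.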

 

Agreement

Adopt the text proposal to TR 38.843 to improve the description of model generalization.

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4             Positioning accuracy enhancements

6.4.1         Evaluation assumptions, methodology and KPIs

<Unchanged text is omitted>

Model generalization:

To investigate the model generalization capability, at least the following aspect(s) are considered for the evaluation for AI/ML based positioning:

-      Different drops: Training dataset from drops {A_0, A_1, …, A_{N-1}}, test dataset from unseen drop(s) (i.e., different drop(s) than any in {A_0, A_1, …, A_{N-1}}). Here N ≥ 1.

...

-      Other aspects are not excluded.

-      Companies can evaluate the impact of at least the following issues related to measurements on the positioning accuracy of the AI/ML model. The simulation assumptions reflecting these issues are up to companies.

·        SNR mismatch (i.e., SNR when training data are collected is different from SNR when model inference is performed).

·        Time varying changes (e.g., mobility of clutter objects in the environment)

·        Channel estimation error

 

For both direct AI/ML approach and AI/ML assisted approach, for a given AI/ML model design (e.g., input, output, single-TRP vs multi-TRP), identify the generalization aspects where model fine-tuning/mixed training dataset/model switching is necessary.

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================
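The "mixed training dataset" handling mentioned in the text proposal above (training data drawn from multiple deployment scenarios, including the test scenario) versus training on a single, different scenario can be sketched with toy data; the feature arrays and distribution shift here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy feature sets for two deployment scenarios (distributions differ).
scenario_a = rng.standard_normal((1000, 16))
scenario_b = rng.standard_normal((1000, 16)) + 0.5

# Generalization to an unseen scenario: train on scenario A only,
# test on scenario B.
train_single, test_single = scenario_a, scenario_b

# Mixed training dataset: training data drawn from both scenarios,
# including the same deployment scenario as the test dataset.
mixed = np.concatenate([scenario_a[:500], scenario_b[:500]])
rng.shuffle(mixed)                    # in-place shuffle along axis 0
train_mixed, test_mixed = mixed, scenario_b

print(train_single.shape, train_mixed.shape)
```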

 

Agreement

Adopt the text proposal to TR 38.843 to improve the description of labelling error.

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4.1         Evaluation assumptions, methodology and KPIs

<Unchanged text is omitted>

Labels:

The performance impact from availability of the ground truth labels (i.e., some training data may not have ground truth labels) is to be studied. The learning algorithm (e.g., supervised learning, semi-supervised learning, unsupervised learning) is to be reported by participating companies and, when providing evaluation results, data labelling details need to be described, including:

-      Meaning of the label (e.g., UE coordinates; binary identifier of LOS/NLOS; ToA)

-      Percentage of training data without label, if incomplete labelling is considered in the evaluation

-      Imperfection of the ground truth labels, if any

Whether, and if so how, an entity can be used to obtain ground truth label and/or other training data is to be studied.

 

For direct AI/ML positioning, the impact of labelling error to positioning accuracy is studied considering:

-      The ground truth label error in each dimension of x-axis and y-axis can be modelled as a truncated Gaussian distribution with zero mean and standard deviation of L meters, with truncation of the distribution to the [-2*L, 2*L] range. Value L is up to sources.

-      [Whether/how to study the impact of labelling error to label-based model monitoring methods]

-      [Whether/how to study the impact of labelling error for AI/ML assisted positioning.]

For AI/ML assisted positioning with TOA as model output, study the impact of labelling error to TOA accuracy and/or positioning accuracy.

-      The ground truth label error of TOA is calculated based on location error. The location error in each dimension of x-axis and y-axis can be modelled as a truncated Gaussian distribution with zero mean and standard deviation of L meters, with truncation of the distribution to the [-2*L, 2*L] range.

-      Value L is up to sources.

-      Other models of labelling error are not precluded

-      Other timing information, e.g., RSTD, as model output is not precluded.

For AI/ML assisted positioning with LOS/NLOS indicator as model output, study the impact of labelling error to LOS/NLOS indicator accuracy and/or positioning accuracy.

-      The ground truth label error of LOS/NLOS indicator can be modelled as m% LOS label error and n% NLOS label error. Values m and n are up to sources.

·        m% = FN/N_LOS is the false negative rate of the training data label, where FN (False Negative) is the number of actual LOS links which are incorrectly labelled as NLOS, and N_LOS is the total number of actual LOS links;

·        n% = FP/N_NLOS is the false positive rate of the training data label, where FP (False Positive) is the number of actual NLOS links which are incorrectly labelled as LOS, and N_NLOS is the total number of actual NLOS links.

-      Companies consider at least hard-value LOS/NLOS indicator as model output.

<Unchanged text is omitted>

6.4.2.5      Non-ideal label(s)

<Unchanged text is omitted>

AI/ML assisted positioning

Evaluations show that AI/ML assisted positioning with timing information (e.g., ToA) as model output is robust to certain label error based on evaluation results of L in the range of (0, 5) meter. The exact range of label error that can be tolerated depends on the positioning accuracy requirement, where tighter positioning accuracy requirement demands smaller label error.

Based on evaluation results from 3 sources, for AI/ML assisted positioning where the model output includes the LOS/NLOS indicator, when the model is trained with a dataset containing random LOS/NLOS label error, the models show no or minor degradation of LOS/NLOS identification accuracy up to at least m%=20% and at least n%=20%. When the training dataset has up to m%=20% and n%=20%, evaluation results show that the LOS/NLOS identification accuracy is P_lablErr = P_noLablErr – d (percentage), where d is in the range of -1.2%~3.1%.

·        P_noLablErr (percentage) is the LOS/NLOS identification accuracy when m%=0% and n%=0%;

·        m% = FN/N_LOS is the false negative rate of the training data label, where FN (False Negative) is the number of actual LOS links which are incorrectly labelled as NLOS, and N_LOS is the total number of actual LOS links;

·        n% = FP/N_NLOS is the false positive rate of the training data label, where FP (False Positive) is the number of actual NLOS links which are incorrectly labelled as LOS, and N_NLOS is the total number of actual NLOS links.

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================
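The label error model in the text proposal above (zero-mean Gaussian with standard deviation L meters, truncated to [-2*L, 2*L]) can be sampled, e.g., by rejection; a small numpy sketch with an illustrative value of L (the TR leaves L up to sources):

```python
import numpy as np

rng = np.random.default_rng(4)

def truncated_gaussian(std_l, size):
    # Zero-mean Gaussian with standard deviation std_l, truncated to
    # [-2*std_l, 2*std_l] by redrawing out-of-range samples (rejection).
    out = rng.normal(0.0, std_l, size)
    bad = np.abs(out) > 2.0 * std_l
    while bad.any():
        out[bad] = rng.normal(0.0, std_l, int(bad.sum()))
        bad = np.abs(out) > 2.0 * std_l
    return out

L = 2.0                                    # std in meters; value is up to sources
err_x = truncated_gaussian(L, 10000)       # x-axis label error
err_y = truncated_gaussian(L, 10000)       # y-axis label error
print("max |label error|:", float(np.abs(np.concatenate([err_x, err_y])).max()))
```

For the AI/ML assisted case with TOA as model output, the same per-axis location error would then be propagated into a TOA label error, as described in the proposal.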

 

Agreement

Adopt the text proposal to TR 38.843 to improve the description on evaluation assumption and methodology.

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4             Positioning accuracy enhancements

6.4.1         Evaluation assumptions, methodology and KPIs

<Unchanged text is omitted>

Model generalization:

To investigate the model generalization capability, at least the following aspect(s) are considered for the evaluation for AI/ML based positioning:

...

-      InF scenarios, e.g., training dataset from one InF scenario (e.g., InF-DH), test dataset from a different InF scenario (e.g., InF-HH)

-      If an InF scenario different from InF-DH is evaluated for the model generalization capability, the selected parameters (e.g., clutter parameters) are compliant with TR 38.901 Table 7.2-4 (Evaluation parameters for InF). Note: In TR 38.857 Table 6.1-1 (Parameters common to InF scenarios), InF-SH scenario uses the clutter parameter {20%, 2m, 10m} which is compliant with TR 38.901.

...

Evaluation assumptions:

The IIoT indoor factory (InF) scenario is a prioritized scenario for evaluation of AI/ML based positioning. Specifically, InF-DH sub-scenario is prioritized for FR1 and FR2.

Reuse the common scenario parameters defined in Table 6-1 of TR 38.857. For evaluation of the InF-DH scenario, the parameters are modified from TR 38.857 Table 6.1-1 as shown in Table 6.4.1-1. The parameters in the table are applicable to InF-DH at least. If another InF sub-scenario is evaluated in addition to InF-DH, some parameters in Table 6.4.1-1 may be updated. If an InF scenario different from InF-DH is evaluated for the model generalization capability, the selected parameters (e.g., clutter parameters) are compliant with TR 38.901 Table 7.2-4 (Evaluation parameters for InF). Note: In TR 38.857 Table 6.1-1 (Parameters common to InF scenarios), the InF-SH scenario uses the clutter parameter {20%, 2m, 10m}, which is compliant with TR 38.901.

Table 6.4.1-1: Parameters common to InF scenario (Modified from TR 38.857 Table 6.1-1) for AI/ML based positioning evaluations

<Unchanged text is omitted>

 

When single-TRP construction is used for the AI/ML model, companies report at least the AI/ML complexity (model complexity, computational complexity) for N TRPs, which are used to determine the position of a target UE, considering the various constructions in Table 6.4.1-2 below.

Table 6.4.1-2: Model complexity and computational complexity to support N TRPs for a target UE

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================

 

Agreement

Capture in TR 38.843 the model inference complexity figure for the positioning use case, which shows the (a) model complexity in number of real parameters (millions) and (b) computational complexity in FLOPs (millions).

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4.2         Performance results

<Unchanged text is omitted>

Model monitoring

...

For both direct AI/ML and AI/ML assisted positioning, evaluation results have been provided by sources to demonstrate the feasibility of label-free model monitoring methods.

Model complexity and computational complexity

For AI/ML based positioning method, companies have submitted evaluation results to show that for their evaluated cases, for a given company’s model design, a lower complexity (model complexity and computational complexity) model can still achieve acceptable positioning accuracy (e.g., <1m), albeit degraded, when compared to a model with higher AI/ML complexity.

 

In Figure 6.4.2-1 below, the model inference complexity reported by companies for the positioning use case is shown, including (a) on the x-axis: model complexity in number of real parameters (millions) and (b) on the y-axis: computational complexity in FLOPs (millions). Figure 6.4.2-1 shows the range of complexity for the following schemes: (1) direct positioning; (2) assisted positioning with multi-TRP; (3) assisted positioning with single-TRP and one-model for N TRPs; and (4) assisted positioning with single-TRP and N models for N TRPs. For details of the complexity values corresponding to Figure 6.4.2-1, please see POS_Table 1.

For the three schemes of AI/ML assisted positioning, the complexity is calculated according to Table 6.4.1-2. Both model complexity and computational complexity values are as reported by participating companies. There is no effort to align the procedure across companies on how the complexity values are obtained. In addition, optimizing AI/ML complexity (i.e., model complexity and computational complexity) is out of scope of the study item.


Figure 6.4.2-1. Model complexity and computational complexity for four schemes of AI/ML based positioning.

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================
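To illustrate how the two complexity axes of Figure 6.4.2-1 (model complexity in millions of parameters, computational complexity in millions of FLOPs) can be obtained, the following is a minimal sketch for a toy fully connected model. The layer sizes are hypothetical and are not taken from any company’s submitted model; the 2-FLOPs-per-weight convention (one multiply and one add) is one common counting assumption.

```python
# Minimal sketch: counting parameters and FLOPs for a toy dense
# positioning model. Layer sizes are hypothetical, not from any
# company's submission captured in TR 38.843.

def mlp_complexity(layer_sizes):
    """Return (parameters, FLOPs) for a fully connected network.

    A dense layer with n_in inputs and n_out outputs contributes
    n_in*n_out weights + n_out biases, and roughly 2*n_in*n_out
    FLOPs per inference (one multiply and one add per weight).
    """
    params = 0
    flops = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        params += n_in * n_out + n_out
        flops += 2 * n_in * n_out
    return params, flops

# Example: a CIR-like input flattened to 4096 features, 2-D position output.
p, f = mlp_complexity([4096, 512, 256, 2])
print(f"model complexity: {p / 1e6:.2f} M parameters")
print(f"computational complexity: {f / 1e6:.2f} M FLOPs")
```

Under these assumptions the toy model lands at roughly 2.2 M parameters and 4.5 M FLOPs, i.e., a single point in the scatter of Figure 6.4.2-1.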

 

Agreement

Adopt the text proposal for additional high-level summary of evaluations of AI/ML based positioning in the study item.

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4.2.6      Summary of Performance Results for Positioning accuracy enhancements

Editor’s note: Section for FL to summarize the evaluations.

 

For the use case of positioning accuracy enhancement, extensive evaluations have been carried out. Both direct AI/ML positioning and AI/ML assisted positioning are evaluated using a one-sided model. The following areas are investigated.

...

-        AI/ML complexity. For a given company’s model design, in terms of model inference complexity (model complexity and computational complexity), a lower complexity model can still achieve acceptable positioning accuracy (e.g., <1m), albeit degraded, when compared to a higher complexity model.

-        Generalization study. Evaluations are carried out to investigate various generalization aspects, where the AI/ML model is trained with a dataset of one deployment scenario, while tested with a dataset of a different deployment scenario. The generalization aspects include: different drops; different clutter parameters; different InF scenarios; network synchronization error; UE/gNB RX and TX timing error; SNR mismatch; channel estimation error; time varying changes.

 

Methods are evaluated which have been shown to be able to handle generalization issues, including:

o    Better training dataset construction (i.e., a mixed dataset), where the training dataset is composed of data from multiple deployment scenarios, including data from the same deployment scenario as the test dataset.

o    Fine-tuning/re-training, where the model is re-trained/fine-tuned with a dataset from the same deployment scenario as the test dataset. The impact of the amount of fine-tuning data on the positioning accuracy of the fine-tuned model is evaluated for the various generalization aspects. Evaluation results are obtained for two experiments:

       The AI/ML model is (a) previously trained for scenario A with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for scenario B with a dataset of sample density x% × N (#samples/m2), (c) then tested under scenario B. The horizontal positioning accuracy at CDF=90% is E meters.

       The AI/ML model is (a) previously trained for scenario A with a dataset of sample density N (#samples/m2), (b) followed by fine-tuning for scenario B with a dataset of sample density x% × N (#samples/m2), (c) then tested under scenario A. The horizontal positioning accuracy at CDF=90% is E meters.

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================
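The fine-tuning experiment described in the text proposal (pre-train on scenario A at sample density N, fine-tune on scenario B with x% × N samples, test on scenario B) can be sketched as follows. Everything here is a toy stand-in: the "model" is a linear regressor trained by gradient descent and the "scenarios" are synthetic, purely to make the three-step procedure concrete; it is not any company's evaluated model.

```python
import numpy as np

# Toy sketch of the fine-tuning experiment: (a) pre-train on scenario A
# at sample density N, (b) fine-tune on scenario B with x% of N samples,
# (c) test on scenario B. All scenarios and the model are synthetic.

rng = np.random.default_rng(0)

def make_scenario(n_samples, w_true, noise=0.05):
    """Generate a toy dataset whose input-output map is w_true."""
    X = rng.normal(size=(n_samples, 8))
    y = X @ w_true + noise * rng.normal(size=n_samples)
    return X, y

def train(X, y, w=None, epochs=200, lr=0.05):
    """Gradient-descent training on MSE; pass w to warm-start (fine-tune)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w = w - lr * (X.T @ (X @ w - y)) / len(y)
    return w

w_A = rng.normal(size=8)               # scenario A "ground truth"
w_B = w_A + 0.3 * rng.normal(size=8)   # scenario B differs from A

N = 1000       # training sample density for scenario A
x_pct = 10     # fine-tuning data: x% of N
X_A, y_A = make_scenario(N, w_A)
X_B, y_B = make_scenario(N * x_pct // 100, w_B)
X_test, y_test = make_scenario(500, w_B)

w = train(X_A, y_A)                       # (a) pre-train on scenario A
w = train(X_B, y_B, w=w, epochs=100)      # (b) fine-tune on scenario B
rmse = np.sqrt(np.mean((X_test @ w - y_test) ** 2))  # (c) test on B
print(f"test RMSE on scenario B after fine-tuning: {rmse:.3f}")
```

Sweeping `x_pct` in such a setup is the toy analogue of studying the impact of the amount of fine-tuning data on the accuracy of the fine-tuned model.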

 

 

R1-2312515         Summary #4 on Remaining Aspects of Evaluating AI/ML for Positioning Accuracy Enhancement           Moderator (Ericsson)

From Thursday session

Agreement

Adopt the text proposal for additional high-level summary of evaluations of AI/ML based positioning in the study item.

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4.2.6      Summary of Performance Results for Positioning accuracy enhancements

...

Based on RAN1 evaluations of AI/ML based positioning,


-        If AI/ML based positioning is considered for normative work, it is desired to further investigate model input design aspects: the model input type (e.g., CIR, PDP, DP), dimension (e.g., parameters N'TRP, Nt, N't, Nport) and related format (e.g., for the timing information: absolute time or relative time) considering the tradeoff of positioning accuracy, signaling overhead, and AI/ML complexity.

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================
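The tradeoff among the model input types named above (CIR, PDP, DP) is essentially one of how much information per measurement is kept. A hedged sketch, with illustrative shapes and thresholds only (the tap count, byte widths, and 20 dB cut are assumptions, not values from the study):

```python
import numpy as np

# Illustrative only: relative per-measurement sizes of the three model
# input types. CIR keeps complex taps; PDP discards phase; DP keeps
# only the delays of significant taps. Shapes/thresholds are assumed.

rng = np.random.default_rng(1)
Nt = 256                                              # delay taps (assumed)
cir = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)  # toy complex CIR

# PDP: power delay profile -- per-tap power, phase discarded.
pdp = np.abs(cir) ** 2

# DP: delay profile -- keep only delays of taps within 20 dB of the
# peak, amplitudes discarded.
threshold = pdp.max() * 1e-2
dp = np.flatnonzero(pdp >= threshold)

# Rough sizes, assuming 4 bytes per real value / tap index:
print("CIR:", 2 * Nt * 4, "bytes")   # I and Q per tap
print("PDP:", Nt * 4, "bytes")
print("DP: ", dp.size * 4, "bytes")  # surviving tap indices only
```

This is the dimension/overhead side of the tradeoff only; the positioning-accuracy side (how much each reduction costs) is what the evaluations in 6.4.2 quantify.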

 

======================= Start of text proposal to TR 38.843 v1.2.0 ====================

6.4.2.6      Summary of Performance Results for Positioning accuracy enhancements

Editor’s note: Section for FL to summarize the evaluations.

...

Based on RAN1 evaluations of AI/ML based positioning,

-        It is beneficial to support both direct AI/ML and AI/ML assisted positioning approaches since they can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods in the evaluated indoor factory scenarios.

-        Both UE-side model and NW-side model can significantly improve the positioning accuracy compared to existing RAT-dependent positioning methods.

-        It is desired to apply methods to handle generalization aspects.

-        It is desired to consider training data collection requirements.

<Unchanged text is omitted>

=======================  End of text proposal to TR 38.843 v1.2.0 ====================

 

 

Final summary in R1-2312516.

 

===========================================================================================

R1-2312425         FL summary #1 on remaining open aspects of AI/ML positioning          Moderator (vivo)

From Wednesday session

Agreement

It is recommended to specify the necessary measurement, signaling and procedures to facilitate training, inference, monitoring and/or other LCM operations for both direct AI/ML positioning and AI/ML assisted positioning.

·       specify the necessary signaling for data collection; investigate the necessity of other information for supporting data collection and, if needed, specify it during normative work

·       investigate the necessity and signaling details of measurement enhancements and, if needed, specify them during normative work

·       investigate the necessity and signaling details of monitoring method(s) and, if needed, specify them during normative work